
Objective structured clinical examination assessment

VOL: 103, ISSUE: 43, PAGE NO: 30-31

Dawn Brookes, MA Ed, BA, RGN

Community matron, Derbyshire County PCT

ABSTRACT Brookes, D. (2007) Objective Structured Clinical Examination assessment. www.nursingtimes.net

This article considers the use of the Objective Structured Clinical Examination (OSCE) in nurse prescribing courses. It sets out an evaluation of the system in place at the time of my involvement in OSCE assessment. The literature is used to develop these thoughts in relation to adult learning and to the use of the OSCE in nursing. Marking schedules are discussed, as is the need for assessment in adult education. The whole process is examined to highlight issues later addressed by the course team.

The article explains the time pressures that markers may feel, and the lack of validity of single-marked OSCEs compared with other assessments that are double-marked. Where OSCEs are double-marked live, the process becomes time-consuming and expensive, as well as adding further stress for students undertaking the examination. For this study I re-marked OSCEs using videos and discuss the differences that occurred. As a result of this appraisal I made recommendations, some of which were taken up by the course team.

Other recommendations relate to the construct validity of the OSCE and to introducing student self-assessment as part of the process. In addition, marking schedules need to be agreed and the marking system must be clear to all involved.

Introduction

 

 

Nurse prescribing was introduced in 1994, eight years after the initial recommendations of the Cumberlege Report (DHSS, 1986) and following the first Crown Report (Department of Health, 1989), with district nurses and health visitors training to prescribe from a very limited formulary (Courtenay and Griffiths, 2007).

This formulary, initially known as the Nurse Prescribers' Formulary (NPF), is now known as the Nurse Prescribers' Formulary for Community Practitioners. Other nurses were later able to train as extended nurse prescribers following government agreement to the recommendations of a second Crown Report (DH, 1999). This type of prescribing was initially known as 'extended nurse prescribing' and the formulary as the Nurse Prescribers' Extended Formulary. However, it is now known as 'nurse independent prescribing' and there is no longer a formulary limiting prescribing by these nurses, although some limitations remain in relation to the prescribing of controlled drugs (Courtenay and Griffiths, 2007).

Nurse independent prescribers are still bound to prescribe within their competence and should therefore only prescribe medicines about which they have knowledge, for conditions that fall within their area of competence. For more information on the subject see Non-medical Prescribing in Health Care Practice: A Toolkit for Students and Practitioners (Brookes and Smith, 2007). This book also provides an explanation of supplementary prescribing and patient group directions.

Nurse prescribing is part of a strategy to improve patients' access to medicines (DH, 2006), and new standards were introduced by the Nursing and Midwifery Council (NMC) last year (NMC, 2006).

While undertaking a master's degree in education, I was involved in assessment for the extended and supplementary prescribing course for nurses, midwives and health visitors. This article forms the basis of a reflective appraisal of OSCE as a form of assessment.

The nurse groups involved were practice nurses, district nurses, nurse practitioners, specialist nurses and family planning nurses. There were 14 students in total, studying at academic level 3 (degree level).

The assessment process included the following:

  • 26 days' attendance and 13 days' clinical learning with a GP supervisor over a six-month period;
  • Achievement of clinical competencies (pass/fail);
  • A portfolio of evidence, reviewed by the course leader;
  • Unseen written exams (multiple choice and short answer), with an overall pass mark of 70%;
  • An unseen scenario question, with a pass mark of 50%;
  • An objective structured clinical examination (OSCE), pass or fail;
  • Completion of a clinical management plan for supplementary prescribing, critiqued by the course team.

The main focus of this article is the OSCE assessment. I invigilated one of the mock OSCE 'stations' (see below).

Background to OSCE assessment

Traditionally the assessment requires students to rotate through a number of simulated professional tasks set up as 'stations'. Students stay at each station for a set amount of time and then move on. They are usually marked against a detailed checklist marking schedule given to the assessor.

This form of assessment was pioneered by the medical profession (Harden and Gleeson, 1979, cited in Nicol and Freeth, 1998). It has since been adopted by other professions, including occupational therapy, physiotherapy, radiotherapy and, more recently, nursing (Nayer, 1993; Nicol and Freeth, 1998). The process has been positively evaluated by students (Anderson and Stickley, 2002).

The validity, however, depends on the quality of the problems presented at each station and on the design of the assessment schedule (Nicol and Freeth, 1998). Ross et al (1988) felt that the OSCE was not appropriate for assessing nursing skills and did not reflect the reality of nursing practice.

It is important to remember the background of the 1980s, when nurses stopped being assessed in clinical rooms - a practice criticised for being artificial - and were instead assessed in practice. Assessment in practice became continuous assessment, as recommended by the English National Board for Nursing, Midwifery and Health Visiting (ENB, 1996).

Continuous assessment has in turn been criticised because of shorter practice placements and shorter placements in hospital settings. Other forms of assessment are being considered as a result of the Fitness for Practice document issued by the then United Kingdom Central Council for Nursing, Midwifery and Health Visiting (now the Nursing and Midwifery Council) (UKCC, 1999). This highlighted that many nurses were not considered by the profession - and in many cases by patients - to be competent on qualifying.

Prescribing education

The prescribing course is set at a minimum of level 3 (degree level) and can now be taken at most universities at level 4 (postgraduate level). The minimum level was stipulated by the Department of Health and the ENB (now the NMC), which also stipulated that an OSCE be used to assess competence.

Aim of evaluation

The aim was to establish how students responded to the OSCE and to examine the internal and external validity of the process.

Method

The evaluation involved the following stages:

  • Assessing and marking a mock OSCE;
  • Reflecting on the process;
  • Re-marking the OSCE using videos recorded of the process and comparing the results;
  • Making recommendations to the course team.

Process

For the assessment at the university where I worked, two stations were used, as numbers were not stipulated by government and regulatory bodies. The course team decided on two in-depth simulated consultations, rather than many short stations, because of the nature of nursing and the way nurses carry out consultations.

The simulated OSCEs lasted 15 minutes each. In practice, consultations are shorter for practice nurses and longer for specialist nurses, district nurses and community matrons.

Fifteen minutes was chosen because the average appointment for nurses experienced in managing minor illness is 7-10 minutes; appointments may take longer than this, and the team felt that 10 minutes was not long enough for the assessment. The average district nursing, specialist nursing and community matron assessment takes 45-90 minutes, and their assessment of people with minor illnesses may also take longer.

All nurses undertaking a prescribing course gain the same qualification, so the OSCE time appears to have been standardised around the majority who, in the early years of non-medical prescribing implementation, were those working in GP practices. This may disadvantage community nurses and nurse specialists undertaking the course.

A mock OSCE is commonly used by universities to prepare students for what will be expected of them in the final examination, and to introduce the process to those who have not experienced this form of assessment before. Many universities video the assessment in addition to having examiners present; at the university where this evaluation took place, one examiner was present in the room and the consultation was videoed. The videos served two purposes: first, as a teaching tool to help students highlight areas they might need to work on; second, to resolve any discrepancies arising from the nature of the assessment. For example, a student might think they had said something that they had not, or the assessor might miss something that could be picked up from the video. Students are allowed two attempts at the final OSCE, in keeping with most university policies.

As the OSCEs used by most, if not all, universities involve simulated rather than real consultations, patients are not put at risk, and if the practice supervisor considered a student to be unsafe, that student would not be able to qualify. There was no second marking of OSCEs at the time of this evaluation; external moderation of a proportion of the finals was undertaken by an external examiner, who had access to the marking schedule and videos.

Results

I was given the OSCE scenario and marking schedule the day before the assessment. There were two stations, one relating to hay fever and the other to chest symptoms. I was assessing the latter, which aimed to test history-taking and consultation technique, with special regard to negotiation skills. The instructions to the assessor are given in Box 1.

Box 1. Chest symptoms station: instructions for the assessor

This station is designed to test history-taking and consultation technique, especially negotiating skills. If the student offers to examine the 'patient', ask them to mime what they wish to do, with the patient remaining clothed. When they have completed the examination, show them the card, which says 'chest examination normal'.

If the student hesitates for long or appears stumped, intervene only to reassure and then recommence. Any prompts should be non-directive.

The student is given a brief, as were the drama students whom the university used to act as patients, and the consultation is then undertaken in 15 minutes. I looked at the marking schedule before assessing the next day.

On being presented with the timetable for the morning, the first thing I noticed was that no break was scheduled and the assessment process would take over three hours for 13 students. Each OSCE was timetabled for 15 minutes, with no time allotted for making comments on the marking schedule, which was to be completed during the consultation. The schedule included a final criterion relating to the written notes the nurse had made, which had to be assessed once the student had left the room. This meant taking an extra couple of minutes at the end of each session, making each subsequent student slightly later as time went on.

Research has demonstrated that concentration wavers after 30-60 minutes; this period can be longer in older people, but three hours' intense concentration is very difficult. The exam engaged both hearing and vision, and each student was given a brief introduction to the OSCE situation in order to steady their nerves. The videos would help if concentration wavered too much.

Results of station 1 (chest symptoms) mock OSCE: there were 10 criteria on the schedule for each student to pass, and no marks or percentages were allocated to the individual areas (Table 1).

Table 1. Student marks against the 10 criteria on the schedule

Student       Pass   Borderline   Fail
Student 1       3        6          1
Student 2       2        5          3
Student 3       1        8          1
Student 4       2        5          3
Student 5       4        5          1
Student 6       5        3          2
Student 7       5        5          0
Student 8       6        4          0
Student 9       4        4          2
Student 10      6        2          2
Student 11      7        3          0
Student 12      4        6          0
Student 13      1        7          2

Table 1 shows that many of the students achieved borderline passes in a number of the areas, and some failed in only one area. The system as it stood could mean that a fail in one area was a fail of the whole OSCE - this is in line with the handbook, which states pass or fail as the assessment outcome. My concern was that it did not seem to matter how many borderlines were obtained, as these constituted a pass. It was not obvious what the point of the borderline score was, or whether a particular number of borderlines would constitute a fail. This is discussed further in the recommendations.

Reflections on the process

  • The majority of students were extremely nervous;
  • The assessment process was taxing and it was difficult to take appropriate time to mark within the time allotted;
  • The drama student was new to the process and made some mistakes (making it easier for some students and harder for others);
  • The majority of candidates responded to what they saw in front of them (an 18-year-old) instead of what they had read on the card (date of birth 12/03/36), which may have influenced their decision-making;
  • Approximately half the students did not follow a consultation structure and therefore did not extract the information they needed; those who did follow a structure gained more information;
  • Only two or three students were proficient at performing a chest examination;
  • None of the students was familiar with the formulary used for nurse prescribing;
  • The majority of students (11 out of 13) were used to having information at their fingertips by way of a computer; the two who normally worked in patients' homes were more at ease with asking the patient for information.

Discussion

The first three points relate to the OSCE as an assessment form and are discussed later. Many of the findings relate to adult learning theory; for example, adults learn better through meaningful, realistic situations and by practising what they have learnt. The students had been given a demonstration of how to examine the chest but were not putting this into practice, and therefore the skill had not been acquired.

Jarvis and Gibson (1997) defined learning as 'the transformation of experience into knowledge, skills, attitudes, emotions, values, belief, senses', and argued that this definition embraces the cognitive, affective and psychomotor domains as well as including the emotions. Various taxonomies have been devised and revised in order to assess learning at different levels of the various domains. A commonly used one is that of Bloom et al (1956), which identified different areas of learning (cited in Stuart, 2003). In Tables 2 and 3 I have identified the areas in which the OSCE assessment may have tested the classes of educational objectives identified by Bloom.

Table 2. OSCE in relation to Bloom's taxonomy of learning in the cognitive domain

Bloom et al     How tested by OSCE?
Knowledge       Consultation structure; causes of chest symptoms
Comprehension   Compare symptoms with causes
Application     Given the information gained, what interventions would be needed?
Analysis        What reasons were given for arriving at the decision?
Synthesis       Not tested
Evaluation      Of the decisions that could have been made, which was the most appropriate?

Table 3. OSCE in relation to Bloom's taxonomy: the psychomotor and affective domains

Psychomotor domain             How tested
Reflex movements               Not tested (assumed)
Basic movements                Patterns of response not tested
Perceptual abilities           Visual and auditory; listening skills tested
Physical abilities             Not tested
Skilled movements              Chest examination tested, if done
Non-discursive communication   Creative movements not applicable

Affective domain               How tested
Receiving                      Listening to what was happening
Responding                     Responding to patient and history
Valuing                        Patient evaluation and warmth to patient tested
Organising/conceptualising     Attitude to patient
Characterising by value        Not tested

Bloom did not finish work on the psychomotor domain and others have attempted to complete it, according to Atherton (2005). The version used above was put forward by Harrow (1972).

In the psychomotor domain it is visual discrimination that comes under perception. In some ways this may have disadvantaged students, in that the visual stimulus of seeing the young woman in front of them was probably greater than the 'fact' they had read on entering the room, when they were feeling stressed. Forgetting the date of birth in this instance was probably due to retroactive inhibition, which occurs when new learning material interferes with what was previously learnt (Quinn, 2000) - in this case, the presence of a young person in front of them caused the students to forget what they had just read.

With regard to the other areas of learning involving the use of a structured consultation framework, the students had prior warning that this would be tested and were therefore responsible for their own learning in that area. They did not know what the scenario would be, and many of them could not demonstrate the chest examination. The students had observed a demonstration and had the opportunity to practise in that session, so why had they not acquired the skill?

Motor skills are learnt at different levels according to where students are as individuals (Benner, 1984; Quinn, 2000). Simpson (1972) suggested that, if a taxonomy of learning is used, motor skills can be learnt to the highest level.

Learning a skill using a taxonomy includes:

  • Perception of sensory cues;
  • Set, meaning readiness to act;
  • Guided response, where the demonstrator supervises the student;
  • Mechanism, where performance of the skill becomes habitual;
  • Complex overt response, meaning a skilled performance;
  • Adaptation, where the skill can be adapted to different circumstances;
  • Origination, involving the creation of original movement patterns (Simpson, 1972).

A demonstration therefore involves teaching and learning at various levels. The learner may be at any of the stages described in the above taxonomy and teaching needs to be adapted to where the learner is. The classroom-taught session gave the basic knowledge required; it was then up to the students to learn the skill in practice with their medical supervisor.

Possible reasons for not acquiring the skill include:

  • It was seen as not relevant to practice;
  • Lack of opportunity;
  • Not wanting to demonstrate ignorance of the skill;
  • Laziness;
  • Being too busy.

Some of these issues have been addressed under course content and are not the subject of this paper.

Marking schedule

The marking schedule was not standardised and may be difficult to standardise. Three columns are included - pass, borderline and fail - with 10 areas to be assessed in this way. As a new marker I followed the criteria strictly and ticked what I considered to be the appropriate boxes; marked this way, the majority of the students would fail. It was not clear whether ticking one fail box meant failing the OSCE, or whether ticking a majority of borderline boxes would also mean a fail. Following the assessment this was discussed with the course leader and there did not seem to be any real clarity over what constituted a fail. It seemed that the other examiners mainly ticked the pass boxes, perhaps to avoid this uncertainty. Ticking a fail box did indicate a fail, but there was no agreed pass percentage and no indication of what a borderline should count towards. In view of this lack of clarity I will discuss ways in which it might be resolved and look at the literature concerning OSCE examinations.

Other marking schedules include criteria such as:

  • Done/not done;
  • 0-5 marking: 0 = not done; 1 = fail, not competent; 2 = pass, not confident; 3 = confident pass; 4 = good, shows confidence, carries out technique skilfully; 5 = excellent, expert confident practitioner.
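Whatever schedule is adopted, the decision rule needs to be explicit before the examination. Below is a minimal illustrative sketch, in Python, of how such a rule could be written down unambiguously. The essential areas and the thresholds are hypothetical examples for discussion, not the rules of the course described in this article.

```python
# Illustrative only: one possible explicit decision rule for a
# pass/borderline/fail marking schedule. The essential areas and
# thresholds below are hypothetical examples, not the course's rules.

ESSENTIAL_AREAS = {"patient safety"}  # hypothetical: must be passed outright
MAX_BORDERLINES = 3                   # hypothetical: more than this is a fail
MAX_FAILS = 0                         # hypothetical: no outright fails allowed

def station_outcome(marks):
    """marks maps each schedule area to 'pass', 'borderline' or 'fail'."""
    # Essential areas must be passed outright; 'borderline' is not enough.
    if any(marks.get(area) != "pass" for area in ESSENTIAL_AREAS):
        return "fail"
    grades = list(marks.values())
    if grades.count("fail") > MAX_FAILS:
        return "fail"
    if grades.count("borderline") > MAX_BORDERLINES:
        return "fail"
    return "pass"

# Example: two borderlines, no fails and a pass on the essential
# area would be an unambiguous pass under these hypothetical rules.
example = {"patient safety": "pass", "history-taking": "pass",
           "negotiation": "borderline", "documentation": "borderline"}
print(station_outcome(example))  # -> pass
```

Writing the rule down in this form would force agreement in advance on how many borderlines are tolerable and which areas are essential - exactly the clarity that was missing from the schedule described above.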

 

 

Another concern was that I felt rushed on the morning of the OSCE assessments and, as a result, could not be certain that the marking was absolutely correct. This was discussed with the other examiner, who agreed that the process was rather hurried due to the rapid turnover of students from one station to the next. As I was not happy with this, I decided to carry out an exercise for my own personal development and re-marked the OSCEs using the videos. The results (Table 4) led to the recommendations that were later implemented in future OSCE examinations.

Table 4. OSCE results from video marking

Student       Pass   Borderline   Fail
Student 1       5        3          2
Student 2       3        6          1
Student 3       3        4          3
Student 4       4        3          3
Student 5       7        2          1
Student 6       6        3          1
Student 7       9        1          0
Student 8      10        0          0
Student 9       7        2          1
Student 10      7        2          1
Student 11      9        1          0
Student 12      4        6          0
Student 13      6        2          2

Table 5. Comparisons between video marking and original marking

Student   Video pass   Original pass   Video borderline   Original borderline   Video fail   Original fail
1             5              3                 3                   6                 2             1
2             3              2                 6                   5                 1             3
3             3              1                 4                   8                 3             1
4             4              2                 3                   5                 3             3
5             7              4                 2                   5                 1             1
6             6              5                 3                   3                 1             2
7             9              5                 1                   5                 0             0
8            10              6                 0                   4                 0             0
9             7              4                 2                   4                 1             2
10            7              6                 2                   2                 1             2
11            9              7                 1                   3                 0             0
12            4              4                 6                   6                 0             0
13            6              1                 2                   7                 2             2

In respect of the failing criteria, five students fared better following the video marking and two fared worse. Had this assessment been video marked the discrepancies would not have occurred, and with two video markers the increased scrutiny would have added further validity. The video marking was more objective than the initial marking because there was not the same time pressure; in addition, the marker could stop and start if concentration was wavering.
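To show how such discrepancies can be surfaced systematically, the sketch below (in Python, with the counts transcribed from Tables 1 and 4) prints each student's movement between the original and video markings; this is essentially what a moderation meeting between first and second markers would look at.

```python
# Per-student (pass, borderline, fail) counts out of the 10 schedule
# areas, transcribed from Table 1 (original) and Table 4 (video).
original = [(3, 6, 1), (2, 5, 3), (1, 8, 1), (2, 5, 3), (4, 5, 1),
            (5, 3, 2), (5, 5, 0), (6, 4, 0), (4, 4, 2), (6, 2, 2),
            (7, 3, 0), (4, 6, 0), (1, 7, 2)]
video = [(5, 3, 2), (3, 6, 1), (3, 4, 3), (4, 3, 3), (7, 2, 1),
         (6, 3, 1), (9, 1, 0), (10, 0, 0), (7, 2, 1), (7, 2, 1),
         (9, 1, 0), (4, 6, 0), (6, 2, 2)]

for n, (o, v) in enumerate(zip(original, video), start=1):
    d_pass, d_fail = v[0] - o[0], v[2] - o[2]
    flag = "  <- marks moved" if (d_pass or d_fail) else ""
    print(f"Student {n:2}: passes {o[0]:2} -> {v[0]:2} ({d_pass:+}), "
          f"fails {o[2]} -> {v[2]} ({d_fail:+}){flag}")
```

Run on these data, every student except student 12 shows some movement between the two markings, which underlines the case for second marking.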

 

 

It might be argued that lack of experience was the reason for the discrepancies, but without video marking and second marking no one would have been aware of them. This is a danger of marking by a single assessor.

Assessment

Rowntree (1987) suggested that 'assessment will remain with us from the cradle to the grave'. In nursing and healthcare, assessment is a statutory requirement. Stuart (2003) suggested that 'valid and reliable assessment will provide objective data upon which assessment decisions are made'. The NMC relies on assessment decisions made by higher education institutions (HEIs) and clinical practitioners in order to register a nurse or midwife, and other professional bodies have the same reliance on assessments. See Fig 1 for the reasons for assessing nursing and healthcare students.

Debates about assessment will continue but, while the arguments go on, educational institutions and practitioners have a responsibility to carry out the assessments that seem most appropriate for the course delivered. In many instances there is no choice, as with the nurse prescribing course: the form of assessment has already been decided, and it is left to the course team to ensure that the content of the assessments set is relevant, appropriate and fair.

Recommendations for future OSCE assessments

  • Discuss the construct validity of the assessments (formative and summative) prior to the start of the course, with a panel of experienced teachers and the course team;
  • Construct validity to be monitored by external examiners to add rigour;
  • Introduce a student self-assessment form to be completed after leaving the station. Nicol and Freeth (1998) found that this helped students reflect on their performance and prompted identification of strengths and weaknesses. In real practice, if a nurse had forgotten anything vital she or he would contact the patient; this form could add that extra degree of reality to the assessment and should be used within the marking criteria;
  • Instead of having an assessor present, which increases stress levels for the student, use video assessment to mark the consultation with no assessor in the room. The patient could be given any prompts to show the nurse if she or he performs a task that requires a response;
  • With video marking, use first and second marking, as with other forms of assessment, to increase reliability and validity;
  • The drama students who act as patients should be given the scenario and should then test it out with one of the course team before the student OSCE. This should highlight any problems and help them to get the story straight;
  • Devise a marking system that has clarity: if the assessment is pass/fail, use those two indicators on the marking schedule, with clear guidelines as to what constitutes a pass and a fail. If 100% is not necessary, indicate on the schedule which areas are essential for a pass, for example those concerning patient safety.

Introducing these measures may address the concerns raised by Phillips et al (2000), who reported that this form of assessment was seriously flawed, having neither inter- nor intra-assessor reliability (cited in Stuart, 2003). These changes would improve the validity and reliability of the OSCE assessment while helping to reduce some of the student stress surrounding this form of examination.

Conclusion

Since this exercise was carried out, and as a result of the recommendations, video marking has been introduced for both mock and final OSCE assessments. The video is set up before the start of the OSCE and no assessor is present in the room. The videos are later marked by two members of the course team, which has improved the validity and reliability of the results. Both students and assessors find the process less stressful, and it has also reduced the labour intensity of OSCE marking. The other recommendations are being considered and the OSCE examination is reviewed on an ongoing basis.

References

Anderson, M., Stickley, T. (2002) Finding reality: the use of objective structured clinical examination (OSCE) in the assessment of mental health nursing students' interpersonal skills. Nurse Education in Practice; 2: 160-168.

Atherton, J. (2005) Bloom's Taxonomy. http://www.learningandteaching.info/learning/bloomtax.htm

Benner, P. (1984) From Novice to Expert: Excellence and Power in Clinical Nursing Practice. London: Addison Wesley.

Bloom, B. et al (1956) Taxonomy of Educational Objectives. Cited in: Stuart, C. (2003) Assessment, Supervision and Support in Clinical Practice: A Guide for Nurses, Midwives and Other Health Professionals. London: Churchill Livingstone.

Brookes, D., Smith, A. (2007) Non-medical Prescribing in Health Care Practice: A Toolkit for Students and Practitioners. Basingstoke: Palgrave Macmillan.

Courtenay, M., Griffiths, M. (2007) Implementation of independent and supplementary prescribing in the United Kingdom. In: Brookes, D., Smith, A. (2007) Non-medical Prescribing in Health Care Practice: A Toolkit for Students and Practitioners. Basingstoke: Palgrave Macmillan.

Department of Health (2006) Improving Patients' Access to Medicines: A Guide to Implementing Nurse and Pharmacist Independent Prescribing within the NHS in England. London: DH.

Department of Health (1999) Review of Prescribing, Supply and Administration of Medicines. Crown Report 2. London: DH.

Department of Health (1989) Report of the Advisory Group on Nurse Prescribing. Crown Report. London: DH.

Department of Health and Social Security (1986) Neighbourhood Nursing: A Focus for Care. Cumberlege Report. London: HMSO.

English National Board for Nursing, Midwifery and Health Visiting (1996) Regulations and Guidelines for the Approval of Institutions and Courses. London: ENB.

Harden, R., Gleeson, F. (1979) Assessment of clinical competence using objective structured clinical examination. Cited in: Nicol, M., Freeth, D. (1998) Assessment of clinical skills: a new approach to an old problem. Nurse Education Today; 18: 601-609.

Harrow, A. (1972) A Taxonomy of the Psychomotor Domain. New York: McKay.

Jarvis, P., Gibson, S. (1997) The Teacher, Practitioner and Mentor in Nursing, Midwifery, Health Visiting and the Social Services (2nd ed). Cheltenham: Nelson Thornes.

Nayer, M. (1993) An overview of the objective structured clinical examination. Physiotherapy Canada; 45: 3, 171-178.

Nicol, M., Freeth, D. (1998) Assessment of clinical skills: a new approach to an old problem. Nurse Education Today; 18: 601-609.

Nursing and Midwifery Council (2006) Standards of Proficiency for Nurse and Midwife Prescribers. London: NMC.

Phillips, T. et al (2000) Practice and Assessment in Nursing and Midwifery: Doing It for Real. Cited in: Stuart, C. (2003) Assessment, Supervision and Support in Clinical Practice: A Guide for Nurses, Midwives and Other Health Professionals. London: Churchill Livingstone.

Quinn, F. (2000) Principles and Practice of Nurse Education (4th ed). Cheltenham: Nelson Thornes.

Ross, M. et al (1988) Using the OSCE to measure clinical skills performance in nursing. Journal of Advanced Nursing; 13: 45-46.

Rowntree, D. (1987) Assessing Students: How Shall We Know Them? (2nd ed). London: Kogan Page.

Simpson, E. (1972) The Classification of Educational Objectives in the Psychomotor Domain. Washington DC: Gryphon House.

Stuart, C. (2003) Assessment, Supervision and Support in Clinical Practice: A Guide for Nurses, Midwives and Other Health Professionals. London: Churchill Livingstone.

United Kingdom Central Council for Nursing, Midwifery and Health Visiting (1999) Fitness for Practice. London: UKCC.