Faculty Development

Station development

It is important to write the station material well in advance of the examination date so that the stations can be reviewed and tried out prior to the actual assessment. Station material should include:

  • Construct: a statement of what the station is intended to test, e.g. this station tests the candidate’s ability to examine the peripheral vascular system
  • Clear instructions for the candidate: to inform the candidate exactly what task he/she should perform in that station
  • Clear instructions for the examiners: to help the examiner at that station to understand his/her role and conduct the station properly. Include a copy of the candidate instructions
  • List of equipment required
  • Personnel requirements: Whether the station requires a real patient or a simulated patient and the details of such individuals (age, gender, ethnicity)
  • Simulated patient scenario: if the station requires a particular role to be played
  • Marking schedule: this should include the important aspects of the skill being tested, a scoring scheme for each item, and how long the station should last. The marking schedule can be either in a checklist format or a rating scale (Figure 15.2). Items can be grouped into three broad categories: process skills (e.g. rapport, questioning and listening), content skills (e.g. appropriate medical or technical steps or aspects of the task or skill being tested) and management skills (e.g. questions set in specific relation to the case).

Figure 15.2  Global ratings vs checklist scores

When OSCEs were first introduced, extensive, detailed checklists covering each step of a clinical task were produced for each station. Checklists often focused on easily measured aspects of the clinical encounter, while the more subtle but critical factors in clinical performance were overlooked or ignored.

The use of rating scales to assess the performance of clinical skills has been shown to be reliable when used by expert examiners (Cohen et al. 1991). Examiner training can improve their reliability further (Hodges 2003).

It is more effective to use checklists to assess technical skills in the earlier stages of learning (at the ‘novice’ end of the learning trajectory) and to use rating scales to assess more complex skills, especially with increasing levels of professional competence (Arnold 2002; Hodges et al. 2002).
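The difference between the two marking-schedule formats can be sketched in code. This is a minimal illustration only: the station items, skill domains and 1–5 rating scale below are hypothetical examples, not items prescribed by this module.

```python
# Checklist format: each observable step of the task is marked done / not done.
# Items here are invented examples for a peripheral vascular examination station.
checklist = {
    "Introduces self and confirms patient identity": True,
    "Inspects both legs for colour and trophic changes": True,
    "Palpates femoral, popliteal and foot pulses": False,
    "Auscultates for femoral bruits": True,
}

checklist_score = sum(checklist.values())            # number of items completed
checklist_pct = 100 * checklist_score / len(checklist)

# Rating-scale format: the examiner makes a global judgement per domain,
# here on an assumed 1 (poor) to 5 (excellent) scale, using the three
# broad categories named above.
ratings = {
    "process skills (rapport, questioning, listening)": 4,
    "content skills (technical steps of the examination)": 3,
    "management skills (case-specific questions)": 4,
}

global_rating = sum(ratings.values()) / len(ratings)

print(f"Checklist: {checklist_score}/{len(checklist)} items ({checklist_pct:.0f}%)")
print(f"Global rating: {global_rating:.1f}/5")
```

The checklist yields an objective count of completed steps, whereas the rating scale compresses an expert judgement into a small number of domain scores; the passages below note when each is the more reliable choice.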

Thinking point:
How are patients used in the clinical assessments in which you have been involved?
Think about:
(a) Real patients
(b) Standardised patients
(c) Simulated patients
(d) Simulations

How might each of the various types of patient be used most appropriately in clinical assessments?
What are some of the advantages and disadvantages of involving each of the types of patient?
