The teacher’s role
Although many organisations have central or departmental quality units or groups with a responsibility for ensuring that data are gathered, reports are submitted and reviews are carried out, the individual teacher plays a key role in assuring and improving quality. The professionalisation of medical education places increasing scrutiny on, and expectations of, all teachers. Review exercises are based on an expectation that individual teachers build quality assurance and evaluation processes into their own teaching practice. In particular they are concerned with the extent to which learning outcomes are aligned with teaching and learning methods, assessment methods and evaluation (Biggs, 1996), and the ways in which teachers act on misalignment in a continual review cycle.
Learners are at the heart of any educational review cycle. Gathering information about learners – their levels of satisfaction, engagement in learning and achievement of agreed learning outcomes or objectives – is the foundation of all quality systems. These data provide the best information upon which a clinical teacher might base his or her reflections on the need for, and means of, improving the alignment of outcomes, methods and assessment.
A good teacher needs to be in a position to gather information and to respond to it. In addition, he or she needs to maintain the sort of records that will assure organisations and external agencies that robust information is being gathered and used to constantly scrutinise his or her assumptions about student learning.
Gathering and using feedback
One of the key elements of any quality assurance system is ensuring that the data are collated efficiently into a form which can be analysed and that they are presented appropriately. In order to achieve this we need to consider the most appropriate sorts of data.
Hounsell (2009) suggests that data should be gathered from a range of sources including:
- Feedback from students
- Self-generated feedback, e.g. gathered from audio or video observation of one’s own teaching
- Feedback from colleagues, e.g. peer evaluation
- Incidental feedback, e.g. attendance patterns, take-up of options, attentiveness.
How do you or colleagues gather data from different sources to improve teaching and the learning experience?
Gathering evaluative data from students plays an important role in tracking student satisfaction and engagement over time and can be effective at course, programme and institutional level. The majority of clinical teachers will be involved in formally evaluating learning and will constantly gauge the progress learners are making and adjust their approaches to enhance this. For example, skilfully asking questions of learners to ascertain how much they have learned or areas of confusion gives teachers valuable data which can be used to review and amend approaches to teaching or lesson content (see the Facilitating learning in the workplace module). This reflective approach is a hallmark of excellent teaching which does not lend itself to formal systematisation other than through peer review and reflective portfolios maintained for professional development purposes.
In addition, other routinely held institutional data on assessment performance, admissions information or graduate employment may provide useful feedback on the quality and relevance of education.
The quality assurance ‘tool’ most commonly used at classroom level is the student feedback questionnaire. This typically considers either student satisfaction or learner engagement with the learning process. Student engagement questionnaires aim to ascertain the extent to which learning activities stimulate students to become engaged in educationally purposeful activities. Much work has been done by international agencies and consortia in distilling this information into standardised survey instruments. For example, surveys developed in the US (the National Survey of Student Engagement) and Australasia (the Australasian Survey of Student Engagement) provide information about the level of student engagement prompted by teaching.
Satisfaction surveys are typically designed to gather data from learners about courses or teachers. Questionnaires designed to gather both quantitative and qualitative data tend to be the most common method of gathering this information. Student or trainee satisfaction data provides information to teachers and institutions about the way students feel about the learning processes in which they are participating. Such systems may be applied nationally. In the UK, the National Student Survey (http://www.thestudentsurvey.com) gathers information about all students in higher education via an online survey of all final year undergraduate students. For medicine and dentistry the General Medical and Dental Councils carry out national training surveys of all doctors and dentists in training http://www.gmc-uk.org/education/surveys.asp. The GDC includes vocational dental practitioners as well as foundation dentists; see http://www.bda.org/dentists/policy-campaigns/research/workforce-finance/students-young/vdp-survey.aspx.
These surveys increasingly ask students and trainees about their experiences and views about the clinical environment and the care provided as well as the training/learning experiences. This information is then used to feed back and share with training providers as well as other monitoring bodies such as the Care Quality Commission (CQC, http://www.cqc.org.uk). This much more joined-up approach has been set in place partly in response to recommendations made in reports such as the Francis Report (2013), to help gather more information about care provided.
Learners commonly complain that nothing seems to change as a result of their feedback. This can make students resistant to exercises designed to elicit feedback, leading them to provide misleading data or to refuse to participate. Teachers need to demonstrate and explain what is going to change (and also what is not going to change) as a result of feedback. Satisfaction feedback is often criticised by those who believe that it emphasises the wrong things if learning is the goal: a learner who is ‘satisfied’ or ‘happy’ may still not be learning.
Another criticism of feedback questionnaires is that they tend, because of the complexity of their design and administration, to be used at the end of the course as a form of ‘summative’ evaluation. Teachers cannot use the data to modify their approach to teaching or the emphasis of the course to benefit the learners directly; instead the benefit tends to be for future learners. Students can become disenchanted with such systems as it is very hard to demonstrate that their feedback results in improvements. Also, because much of the data gathered from these exercises is quantitative, relatively large sample sizes are required to draw meaningful conclusions. For a reflective educator working with small cohorts of learners or seeking to make improvements on a day-to-day basis, this type of survey has reduced efficacy.
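The point about sample size can be made concrete with a little arithmetic: the margin of error around a mean rating shrinks only with the square root of the number of respondents. The sketch below is illustrative only – the ratings are hypothetical and a simple normal approximation is assumed – but it shows why a cohort of eight gives a far less precise estimate of satisfaction than a cohort of eighty.

```python
import math

def mean_and_margin(ratings, z=1.96):
    """Return the mean of 1-5 Likert ratings and an approximate
    95% margin of error (normal approximation; illustrative only)."""
    n = len(ratings)
    mean = sum(ratings) / n
    # Sample variance (n - 1 denominator), then standard error of the mean
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    margin = z * math.sqrt(var / n)
    return mean, margin

# The same spread of ratings observed in a small and a large cohort:
small = [4, 5, 3, 4, 2, 5, 3, 4]   # n = 8, e.g. a small clinical teaching group
large = small * 10                  # n = 80, e.g. a whole year cohort

_, small_margin = mean_and_margin(small)
_, large_margin = mean_and_margin(large)
# small_margin is roughly three times large_margin: with eight learners the
# mean rating is too imprecise to support confident conclusions.
```

Because the margin scales with 1/√n, halving it requires four times as many respondents – which is why end-of-course questionnaires are far more informative at programme level than for an individual teacher with a handful of learners.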
Many other techniques can be used to systematically gather learner feedback so as to make timely changes to teaching. Examples include Small Group Instructional Diagnosis – a facilitated small group discussion to provide feedback from learners to the teacher (Floren, 2002) – and the ‘one-minute paper’ (Angelo and Cross, 1993), which can provide teachers with timely post-session information about what is working or not working for students.