PTU-007 Evaluating Endoscopy Trainers: How Reliable Are Peer Evaluators?
L Macdougall1, S Corbett1, M Welfare1, C Wells2, J R Barton1

1Northumbria Healthcare Trust, Newcastle-upon-Tyne, UK
2North Tees Hospital, Stockton-on-Tees, UK

Abstract

Introduction The training of future endoscopists is important to ensure the ongoing provision of a safe endoscopy service within the UK; however, the standard of endoscopy training is variable. Peer evaluation can be used to improve teaching, but it does not currently occur routinely in local endoscopy units. We therefore aimed to assess the reliability of peer evaluations made using an evaluation tool currently being developed to capture both trainee and peer evaluations.

Methods The DOTS tool was developed using the list of attributes described by Wells1. To assess its reliability, the tool was trialled on JAG-approved Training the Trainer courses. Courses running from November to March 2012 were contacted and asked to participate. Each course attendee was then sent an information letter and consent form. On day two of the course, participants were asked to complete a copy of the DOTS tool for each training episode they observed.

Data were analysed using SPSS 14; the mean score and Cronbach's alpha were calculated. Reliability was estimated using generalisability theory; an initial analysis was performed using only trainers, peers and the trainer:peer interaction as facets. A further analysis was then conducted including all possible sources of variance.
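For illustration, internal consistency of this kind could be computed as in the minimal Python sketch below; this is not the SPSS procedure used in the study, and it assumes the item-level DOTS scores sit in a pandas DataFrame with one row per completed evaluation and one column per item (the function name and data layout are hypothetical).

  import pandas as pd

  def cronbach_alpha(items: pd.DataFrame) -> float:
      # items: one row per completed evaluation, one column per DOTS item score
      k = items.shape[1]                              # number of items
      item_variances = items.var(axis=0, ddof=1)      # variance of each item
      total_variance = items.sum(axis=1).var(ddof=1)  # variance of the total score
      # Standard formula: alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
      return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)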

Results Eight of the ten courses contacted agreed to participate, and all course participants consented to the study. A total of 189 evaluations were collected, completed by 58 different peers; 45 trainers were evaluated, each receiving between one and ten evaluations. The mean total evaluation score was 63.3 out of 85 (standard deviation 8.6). The tool showed a high level of internal consistency, with a Cronbach's alpha of 0.895. In the initial analysis, 44% of the variance in scores was explained by differences in trainers’ teaching ability, 35% by peer variance and 21% by the peer:trainer interaction. The G-coefficient for one rater was 0.44, and three raters were required for a G-coefficient of 0.7. When the analysis was repeated, the effect of course accounted for 20% of the variance in scores and reliability was much lower, with a G-coefficient of 0.28 for one rater.
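The quoted figures are consistent with an absolute (Phi-type) G-coefficient, in which the trainer variance is divided by the trainer variance plus the remaining error variance averaged over the number of raters; this interpretation is our assumption rather than something stated in the abstract. A minimal sketch reproducing the reported values from the initial analysis (names are illustrative):

  # Variance proportions from the initial analysis: trainer, peer, trainer:peer interaction
  VAR_TRAINER, VAR_PEER, VAR_INTERACTION = 0.44, 0.35, 0.21

  def g_coefficient(n_raters: int) -> float:
      # Error variance (peer plus interaction) is averaged over the number of raters
      error = (VAR_PEER + VAR_INTERACTION) / n_raters
      return VAR_TRAINER / (VAR_TRAINER + error)

  print(f"{g_coefficient(1):.2f}")  # 0.44, as reported for a single rater
  print(f"{g_coefficient(3):.2f}")  # 0.70, matching the three raters needed for acceptable reliability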

Conclusion The DOTS tool showed a high level of internal consistency. On initial analysis, only three peer reviewers were required to achieve an acceptable level of reliability. However, on reanalysis the effect of course accounted for 20% of the variance, and when results were generalised across courses the tool showed poor reliability. The effect of course was unexpected and needs to be investigated further; the tool also needs to be trialled within local endoscopy units.

Disclosure of Interest None Declared

Reference

  1. Wells C. The characteristics of an excellent endoscopy trainer. Frontline Gastroenterology 2010;1:13–18.
