Introduction

Selection into and training in medical specialities in the UK underwent a major change in 2007.1 The Modernising Medical Careers (MMC) programme introduced structured, supervised training. The intention was that a newly qualified doctor would undergo 2 years of foundation training (F1 and F2) and then enter a single run-through grade (Speciality Trainee, ST), giving only one point of competitive entry into a speciality, instead of the two present in the old system of senior house officers (SHOs) and specialist registrars.

The driver for MMC was a report by the Chief Medical Officer, which highlighted that SHO training in the UK had remained unchanged for many decades, despite the Calman reforms of the registrar grade in the 1990s.2

Before the European Working Time Directive and the introduction of the Calman system, it was estimated that surgical trainees in the UK worked around 30 000 h before becoming consultants. It is now estimated that the new training reforms will reduce this figure to 6000 h.2, 3 As a result, it has been suggested that greater emphasis should be placed on selecting the best candidates through more robust methods,4 in order to maximise the educational potential of training programmes and minimise the dropout rate.

The MMC programme attracted several criticisms, which led the Department of Health to commission an independent review led by Professor Tooke.5 The review criticised the weaknesses of the electronic application system (the Medical Training Application Service) and the way in which the application forms were developed. In particular, hypothetical questions about clinical practice were weighted more heavily than verifiable, relevant achievements. The selection process changed in 2008, with selection being led locally through the use of structured (CV-based) application forms that were speciality specific.6 In essence, these retained the same principles that MMC had intended: structured application forms and structured interviews.

The interim Tooke report7 revealed that ‘93% of the profession refutes the selection processes used, and supports greater weight being placed on undergraduate academic achievement, postgraduate academic achievement and experience obtained in the particular speciality applied for.’ The MMC team may have placed less emphasis on experience and postgraduate achievements for a particular reason. In 2007, in some specialities, F2 doctors were in direct competition for ST posts with more experienced candidates, and the former would have been at a disadvantage if these factors were taken into account.

In the past, many specialities, especially surgery, have relied widely on cognitive factors in the selection of trainees, for example, scores from national examinations.8 In ophthalmology, academic achievements have had an important role. In the UK, a pass in the part 1 MRCOphth examination in ophthalmic basic sciences had effectively been a prerequisite for candidates to be interviewed for an SHO post.9 However, this is no longer a requirement for entry into ophthalmology training at ST1.10 Applicants can instead choose to sit the new equivalent, FRCOphth Part 1,11 which combines elements of the old part 1 with theoretical optics and pathology, and is more clinically oriented. Candidates who are already in speciality training are expected to pass this examination before entering the third year of ophthalmic specialist training.

It is possible that other relevant academic achievements, such as undergraduate prizes, may demonstrate commitment to a particular speciality and have a role in selection. In the UK, the only current national speciality-specific prize examination is in ophthalmology. The Duke–Elder undergraduate prize examination takes place once a year in medical schools throughout the UK and Ireland.12 It has been running for over 30 years and examines knowledge not only of clinical ophthalmology but also of ocular physiology, anatomy, pathology, and genetics. It is intended for medical students who have completed their ophthalmology training, but is open to all medical students provided that they have not yet graduated.

The utility of an examination, or any educational assessment, can be evaluated in a number of ways.13, 14 The designers of a formative assessment would be concerned with its educational impact or effectiveness. The Duke–Elder examination should be regarded as a summative assessment; however, its ultimate aim is to encourage medical students into a career in ophthalmology. How successful it has been in achieving this aim remains unknown. This paper therefore examines the proportion of candidates in the top 20 list of the Duke–Elder examination who went on to pursue a career in ophthalmology.

Methods

A retrospective analysis of the top 20 candidates in the Duke–Elder examination from 1989 to 2005 (except 1995) was carried out to determine which candidates subsequently entered the Royal College of Ophthalmologists (RCOphth) Specialist Training Register or the General Medical Council (GMC) Specialist Register for Ophthalmology.

Duke–Elder top 20 candidates

For the purposes of this project, success in the Duke–Elder examination was defined as being ranked amongst the top 20 candidates, as these are the lists published by the RCOphth. The lists do not necessarily contain exactly 20 candidates, because all candidates tied at the 20th rank are included. The Examinations department at the RCOphth provided the lists of top 20 ranked candidates in the Duke–Elder undergraduate ophthalmology prize from 1989 to 2005; the list from 1995 could not be obtained. The lists contained the candidates’ names, rankings, and medical schools.

Entry into specialist training or specialist register

The GMC Register was used to obtain the GMC number of each candidate, and to confirm the medical school from which the candidate graduated and the year the medical degree was awarded. Although the exact date the degree was awarded is not stated on the register, it was estimated from the date provisional registration was obtained, which is 15 July for most UK medical schools but December for Cambridge. The GMC Register also provided details of entry onto the Specialist Register for Ophthalmology (which allows these doctors to practise ophthalmology as consultants).
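
As an illustration of this estimation rule, a minimal sketch in Python follows (the helper function is hypothetical, and the exact day in December for Cambridge is an assumption made here for illustration):

```python
from datetime import date

def estimated_degree_date(registration_year: int, medical_school: str) -> date:
    """Estimate the date a medical degree was awarded from the year of
    provisional GMC registration: 15 July for most UK medical schools,
    December for Cambridge."""
    if medical_school.lower() == "cambridge":
        return date(registration_year, 12, 1)  # month known; day assumed
    return date(registration_year, 7, 15)

print(estimated_degree_date(1998, "Manchester"))  # 1998-07-15
```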

Entry into the Specialist Register for Ophthalmology and details of higher specialist training (HST) in ophthalmology were obtained from the Training and Education department at the RCOphth. An electronic database was available from 1995 onwards. Before this, minutes of the Training Committee meetings were used to obtain details of individuals entering registrar training in ophthalmology. A list of SHOs was not available.

Ethical approval

Although some of the data analysed in this project were available in the public domain (eg, the GMC Register and Duke–Elder candidate rankings), ethical approval was sought from the RCOphth. The Examinations and Education Committees were approached, as access to the Specialist Training Register (ie, Registrars and new ST posts) was needed in addition to the examination data.

Data analysis

Data were entered into Microsoft Excel 2007 and analysed using SPSS v12.0 for Windows (SPSS Inc., Chicago, IL, USA).

A descriptive analysis of the frequency of top 20 candidates from each medical school was performed. An inferential test on these frequencies would be inappropriate, as they may depend on the size of the medical school (ie, larger medical schools have a greater pool of potential candidates who could enter the examination).
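
A minimal sketch of this descriptive tabulation is shown below (in Python/pandas rather than the SPSS used in the study; the records are illustrative, not the study data):

```python
import pandas as pd

# Illustrative records only; one row per top 20 appearance.
appearances = pd.DataFrame({
    "candidate": ["A", "B", "C", "D", "E", "A"],
    "medical_school": ["Belfast", "Manchester", "Belfast",
                       "UCL", "Leeds", "Belfast"],
})

# Frequency of top 20 candidates per medical school (descriptive only;
# no inferential test, as frequency depends on school size).
print(appearances["medical_school"].value_counts())
```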

A statistical analysis was more appropriate for the average ranking score of each medical school. The Kruskal–Wallis test, a non-parametric test for comparing three or more unpaired groups (ie, medical schools), was used to test the null hypothesis that the median ranking scores did not differ between the medical schools. A separate analysis was undertaken after excluding the Irish medical schools, as they were not part of the UK system. If the P-value was significant, a post hoc test would be performed to determine which groups differed from which others. Although some candidates had taken the examination more than once, their attempts were regarded as ‘independent events’, thus satisfying the assumptions of the test.
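
A minimal sketch of the Kruskal–Wallis comparison follows (Python/SciPy rather than SPSS; the ranking scores below are illustrative, not the study data):

```python
from scipy.stats import kruskal

# Illustrative ranking scores (1 = best) grouped by medical school.
ranks_by_school = {
    "Belfast":    [1, 5, 9, 14, 20],
    "Manchester": [3, 7, 11, 16],
    "UCL":        [2, 6, 12, 18, 19],
}

h_stat, p_value = kruskal(*ranks_by_school.values())
print(f"H = {h_stat:.2f}, P = {p_value:.3f}")
# A post hoc pairwise test would follow only if P were significant.
```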

For the candidates who subsequently entered specialist training in ophthalmology, a further analysis was performed to determine whether the ranking position correlated with the time taken to enter training. For candidates who had taken the examination several times, their highest rank was used, as this would be most relevant during selection into training. An average of the ranks could also have been taken so that each candidate had a single data point,15 but the highest rank was deemed most appropriate. This fulfilled the criteria for calculating Spearman's rank correlation coefficient (rs), which requires data points to be independent.
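
A minimal sketch of this correlation follows (Python/SciPy; the paired values below are illustrative only):

```python
from scipy.stats import spearmanr

# Illustrative pairs: each candidate's best Duke-Elder rank and the
# months from graduation to entry into specialist training.
best_rank = [1, 4, 7, 10, 15, 20]
months_to_training = [58, 62, 55, 61, 60, 57]

rs, p_value = spearmanr(best_rank, months_to_training)
print(f"rs = {rs:.2f}, P = {p_value:.3f}")
```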

Results

From 1989 to 2005 (excluding 1995), there were a total of 343 ‘top 20’ Duke–Elder candidates. The estimated total number of UK medical students eligible for this examination during this time was 70 000 (an approximate figure based on medical school admissions only16).

A total of 23 of these candidates had appeared in the top 20 on two occasions and 4 appeared on three occasions. Thus, there were 312 unique top 20 candidates over this period.

The frequency of top 20 candidates from each medical school during this period is displayed in Table 1. The highest numbers of top 20 candidates came from the Irish medical schools (56), Queen's University Belfast (41), Manchester (29), Guy's, King's and St Thomas’ (28), and University College London (28). The fewest came from Leeds (1), Liverpool (2), Southampton (3), Dundee (3), and Aberdeen (4).

Table 1 Frequency of top 20 candidates from each medical school from 1989 to 2005 (excluding 1995)

The mean ranking of candidates from each medical school is summarised in Table 2. There was no significant evidence that the median ranking scores differed between the medical schools (P=0.23; Kruskal–Wallis test), even after the Irish medical schools were excluded from the analysis (P=0.19; Kruskal–Wallis test).

Table 2 Mean Duke–Elder exam ranking score of candidates from each medical school from 1989 to 2005 (excluding 1995)

Out of the 312 unique top 20 candidates, 92 (29.5%) became ophthalmology Specialist Trainees or were registered with consultant status in the UK. Amongst candidates who appeared in the top 20 ranking more than once (n=27), 56% (15/27) had achieved this training/consultant status. The median time to enter HST was 60.5 months for those who appeared once in the top 20 ranking, compared with 57 months for those who appeared more than once; the difference between these two groups was not significant (P=0.36; Mann–Whitney test). There was no correlation between the ranking position and the time taken to enter specialist training after graduation (rs=0.01). It is also important to know how the achievement of a top 20 ranking is represented amongst trainees who have been selected into ophthalmology (ie, what proportion of trainees actually had this academic achievement). Apart from using survey methods, this is difficult to determine. However, a crude estimate can be made by taking the total number of ophthalmic HST trainees during this period, which was 790. On this basis, an estimated 12% (92/790) of trainees selected into ophthalmology may have achieved a top 20 ranking in the Duke–Elder examination.
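
A minimal sketch of the two calculations reported above follows (Python/SciPy; the months-to-HST values are illustrative, while the totals 92 and 790 are taken from the text):

```python
from scipy.stats import mannwhitneyu

# Illustrative months-to-HST for candidates appearing once vs more
# than once in the top 20 (the study medians were 60.5 and 57).
appeared_once = [60, 62, 58, 64, 59, 61]
appeared_more_than_once = [55, 57, 61, 56, 58]

u_stat, p_value = mannwhitneyu(appeared_once, appeared_more_than_once,
                               alternative="two-sided")
print(f"U = {u_stat:.1f}, P = {p_value:.3f}")

# Crude estimate from the reported totals: top 20 candidates entering
# training as a proportion of all HST trainees in the period.
print(f"{92 / 790:.1%}")  # approximately 12%
```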

Discussion

The primary finding of this project is that a large proportion (29.5%) of top 20 Duke–Elder undergraduate prize examination candidates ultimately pursued a career in ophthalmology; however, the ranking position had no correlation with the time taken to enter postgraduate specialist training in ophthalmology. It was estimated that 12% of ophthalmic trainees selected into HST may have had this top 20 ranking.

The methodology used in this project makes it difficult to conclude whether the examination has been effective in its aims, for several reasons. First, a more appropriate comparison would have been between candidates ranking in the top 20 and those who did not. Determining this would involve analysing the ophthalmology SpR/ST database over a comparable period and searching the names of these individuals against the examination's complete candidate results list (not just the top 20). The proportion of specialist trainees ranking in the top 20 of the examination, vs the proportion ranking outside it, could thus be determined, and this would have been a more useful analysis. However, the full Duke–Elder candidate results list has only been available since 2006. This issue could potentially be overcome by surveying UK ophthalmology trainees to determine whether they ranked in the top 20 or not, although such a survey would be limited by the response rate and possibly by recall bias.

Second, the candidates who pursued a career in ophthalmology are likely to have been highly motivated individuals, and their other academic, clinical, and extracurricular achievements in ophthalmology could have had a greater role in their selection. Indeed, inspection of the 2008 shortlisting criteria for ST1 ophthalmology in the London Deanery revealed that a maximum of only 4 points out of 38 (for prizes and commitment to ophthalmology) could be attributed to an individual who possessed the prestigious top 20 ranking.17

This relative ‘unimportance’ placed on the Duke–Elder examination during the shortlisting stage may dissuade potential candidates from taking the examination in the future. However, a number of points should be noted. First, the interim Tooke report7 highlighted that it has become increasingly difficult to shortlist candidates, as applicants can have very similar achievements on their application forms. Second, academic achievements have been used widely in US training programmes to distinguish between candidates in highly competitive specialities, and they provide a more verifiable way of doing so.8, 18 Third, a phenomenon known as the ‘halo’ effect may exist in the scoring of applications. This is well known in the personnel selection psychology literature, whereby a conclusion is reached about a job applicant within the first half minute of an encounter.19 The effect has been observed when USMLE (United States Medical Licensing Examination) step 1 scores were given to interviewers, resulting in significantly higher interview scores being allocated to applicants with high USMLE scores.20 It is possible that such an effect could also occur at the shortlisting stage, as raters often encounter a candidate's qualifications and academic achievements before other sections of the application form. However, during the MMC 2007 recruitment round 1a, parts of the application form were separated from each other in some deaneries,7 which would have prevented such a halo effect.

The value of using cognitive factors in the selection of trainees has been investigated by several studies, which have focused on determining which factors used in the selection process best predict future performance in the training programme. In the USA, poor correlations between basic science examination scores (eg, USMLE step 1) and surgical performance ratings have been found for general surgery trainees21 and orthopaedic trainees.22 In contrast, in internal medicine, the step 1 scores of 123 physician trainees showed a significant correlation with overall training programme evaluations.23 In obstetrics and gynaecology, step 1 scores have also correlated significantly with in-training assessments.24

On balance, the evidence regarding the value of cognitive factors, such as academic achievement, in selecting trainees may be conflicting. However, the biomedical knowledge tested in such examinations may have a key role in developing expert performance as a doctor. Early studies of medical expertise suggested that expert reasoning can appear independent of biomedical knowledge.25 For example, pattern recognition and other forms of non-analytic reasoning can lead to accurate clinical decisions with little or no explicit use of biomedical knowledge.26 In a series of studies that asked clinicians to think aloud while working through a clinical case, a qualitative analysis of the verbal reports revealed little mention of biomedical concepts.27, 28 Only when confronted with a diagnostic challenge did experienced clinicians begin to rely explicitly on biomedical principles.

More recently, evidence has suggested that biomedical knowledge may have a subtle, yet important, role in the development of medical expertise.29 This indirect role forms the basis of Schmidt's encapsulation theory.30 According to this theory, biomedical knowledge and clinical facts become increasingly integrated as the clinician gains experience. For the medical expert, biomedical concepts become encapsulated under clinical facts in the mental representation of a disease. With time, clinicians can seamlessly recognise a group of clinical facts linked by biomedical knowledge without needing to describe the underlying pathophysiology overtly. This encapsulation explains why basic science principles and mechanisms are rarely mentioned in explicit recall or reasoning measures. Teaching medical students the underlying mechanisms of disease, grounded in the basic biomedical sciences, can help them retain greater diagnostic accuracy.31, 32 In another study, psychology students were divided into two groups to learn the clinical features of a series of endocrine diseases.33 The group whose learning materials explained how the features were linked was able to diagnose more accurately when asked to move quickly through the diagnostic challenges. This pattern of performance is similar to that seen in experts performing a well-learned skill,34 suggesting that the causal mechanisms allowed the novices to function more like experts.

Such evidence reinforces the need to continue teaching biomedical knowledge at the undergraduate level, and anything that stimulates undergraduate students' interest in ophthalmology, including the Duke–Elder examination, is welcome. However, whether the Duke–Elder examination had any impact on candidates' future postgraduate clinical performance in ophthalmology remains to be determined.

The interest in ophthalmology that the Duke–Elder examination intends to generate amongst students is supported by its non-restrictive entry policy: students do not need to have undertaken an attachment in ophthalmology and can take the examination more than once. The latter is evident even amongst the top 20 candidates, some of whom appeared in the rankings more than once.

This study found no significant difference in the median ranking scores achieved by candidates from different medical schools, although the frequency of top 20 candidates varied between schools, likely as a function of medical school size. From these data, it is difficult to conclude whether the standard of undergraduate ophthalmology education is similar across medical schools, as there is an inherent sampling bias in the results. First, not all students across the UK entered this examination. Second, the top 20 candidates are an elite group whose knowledge is likely to have come from self-study rather than from the basic undergraduate curriculum taught across many medical schools.35, 36 However, these candidates may have been motivated to enter the Duke–Elder examination by the amount of exposure to ophthalmology they obtained at their medical schools. Future studies could investigate whether there is a true difference in the quality and quantity of ophthalmology training in the undergraduate medical curriculum across the UK.

Overall, the findings of this study suggest that a large proportion of top 20 Duke–Elder examination candidates pursue a career in ophthalmology, although the importance of the Duke–Elder examination in the selection of specialist trainees to date is difficult to determine. Future research could survey shortlisting criteria across the UK, and determine whether a ‘halo effect’ exists in the scoring of training applications. The evidence on the value of cognitive factors, such as academic achievement in the basic clinical sciences, in the selection of trainees is conflicting; however, there is evidence to support their role in the development of medical expertise. This has important implications for how much, and what, is taught in the undergraduate medical curriculum as a whole.