Introduction

The Overseas Registration Examination (ORE) is a means by which dentists who qualified outside the European Economic Area (EEA) may gain entry to the UK Dentists Register maintained by the General Dental Council (GDC). The ORE is the latest iteration of this statutory examination, replacing its predecessor, the International Qualifying Examination, in 2007.

The ORE consists of two parts. Part 1 covers knowledge and applied knowledge and consists of two Multiple Choice Question (MCQ) papers; it must be passed before a candidate is allowed to take Part 2. Part 2 consists of four standalone clinical or practical components that require candidates to demonstrate their clinical skills (see Fig. 1).

Figure 1: Miller's triangle applied to the ORE

Further information on the structure of the examination can be found at http://www.gdc-uk.org/Dentalprofessionals/ORE/Pages/default.aspx

There are two diets (occasions on which the examination is held) of Part 1 per year, each with a capacity of 200 candidates. Six diets of Part 2 are held per year, each with a capacity of 100 candidates. The proportion of candidates passing varies from diet to diet: between 2012 and 2015 the average pass rate for Part 1 was 57.25%, while over the same period the average pass rate for Part 2 was 30.5%. At the time of writing, approximately 1,300 dentists have joined the UK Dentists Register via the ORE route. This paper describes some key features of the ORE in order to enhance understanding of this examination within the dental community.

The Dentists Act 1984,1 sets out the statutory basis for the Overseas Registration Examination. It states, in relation to holders of overseas diplomas (that is, those holding primary dental qualifications from outside the EEA), that the GDC 'shall, for the purpose of satisfying themselves that a person has the requisite knowledge and skill as mentioned in section 15(4)(c) above, and in addition to such other requirements as they may impose on him, require them to sit for examinations held by a dental authority, or a group of dental authorities, under arrangements made by the Council.'

There are some key phrases in the above clause that merit further exploration.

Firstly, for the purposes of the examination, 'requisite knowledge and skill' is taken to mean that level and range of knowledge and skill (and, importantly, equivalent professional values and behaviour) that might be expected of a UK dental graduate at the point of first registration. The knowledge, skills and values/behaviours required of a UK graduate are set out in Preparing for practice: Dental team learning outcomes for registration (PfP),2 which recently replaced the previous guidance The first five years.3 These two documents provide the learning outcomes that are assessed in the ORE, as will be described later in this paper.

Secondly, the examination must be provided by 'a dental authority, or a group of dental authorities'. The GDC is not a dental authority and cannot conduct the examination itself. Parts 1 and 2 are therefore delivered by suppliers holding a contract with the GDC, to an assessment strategy and specification designed and validated by the GDC.

Thirdly, 'arrangements made by the Council' refers to a range of responsibilities, such as:

  • Specifying the standard and outline form of the examination

  • Contracting for suppliers

  • Enrolling candidates

  • Quality assurance of all aspects of the examination.

Fundamentally, the examination exists to protect the public, by ensuring that overseas dentists, whose training lies beyond the knowledge and influence of the GDC, meet certain minimum standards of competence. The standard for each of the papers and clinical assessments is set at the level expected of a recently graduated Bachelor of Dental Surgery (BDS/BChD) student. The examination is not used to limit or control the number of overseas dentists entering the UK register. The examination is also, clearly, a 'high-stakes' endeavour for candidates, who often invest considerable time and resources in attempting it, and whose futures can depend critically upon passing. For these reasons it is essential that all elements of the design and delivery of the ORE reflect best practice and are sufficiently robust to make and defend these high-stakes decisions about candidates' fitness for registration in the UK.

Quality by design

Designing a high quality, robust examination requires consideration of several factors. These have been captured and summarised in an assessment utility equation, which proposes that the usefulness of any assessment is the product of:

Utility = reliability × validity × educational impact × acceptability × cost4 × feasibility (a later addition)5
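Because the factors are multiplied rather than added, a serious deficit in any one of them undermines the whole assessment: an examination with zero validity has zero utility, however reliable or affordable it may be. A minimal sketch of this multiplicative behaviour, using purely illustrative factor scores (none of these values relate to the ORE):

```python
from math import prod

def assessment_utility(factors: dict[str, float]) -> float:
    """Multiplicative utility model: the product of all factor scores."""
    return prod(factors.values())

# Hypothetical factor scores on a 0-1 scale (illustrative only).
example = {
    "reliability": 0.8,
    "validity": 0.9,
    "educational_impact": 0.7,
    "acceptability": 0.85,
    "cost": 0.6,
    "feasibility": 0.75,
}
print(round(assessment_utility(example), 3))  # 0.193

# A zero on any single factor collapses overall utility to zero.
example["validity"] = 0.0
print(assessment_utility(example))  # 0.0
```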

It is beyond the scope of this paper to describe in detail how the ORE addresses each aspect of the utility equation. In terms of validity, however, it is essential that the examination elicits evidence from candidates that allows credible decisions to be made about their fitness for registration. In other words, the examination must be fit for purpose. The design of the ORE attempts to address this fundamental requirement by:

  • Using assessment methods that are aligned with the outcomes being assessed (ensuring that the evidence arising from the assessment supports justifiable decisions about attainment of the outcomes)

  • Widely sampling the learning outcomes (ensuring broad coverage of the curriculum)

  • Setting the standard to be attained in order to pass the examination, using accepted methodology (ensuring only those demonstrating the requisite degree of knowledge and skill are successful)

  • Standardising the examination for all candidates at all sittings (ensuring fairness).

Quality assurance mechanisms are employed to ensure that the above objectives are achieved at each diet. The quality assurance process involves continuous review of feedback on the performance of the examination and its constituent parts, ensuring that the examination maintains appropriate standards over time.

Alignment of assessment methods

Figure 1 shows how Miller's triangle,6 frequently used to illustrate this concept,7 can be applied to the ORE. Learning outcomes that are cognitive in nature (knowledge and the application of knowledge) can be assessed using written tests such as those employed in Part 1 of the ORE. Outcomes that relate to the performance of a skill must be assessed by methods that allow observation of that performance. For example, it is not possible to infer from the answers to a series of multiple choice questions that a candidate can prepare a tooth for a metal-ceramic crown; those answers would only allow us to infer that the candidate knew how to do this. To be sure that the candidate could actually prepare the tooth they would need to be observed doing it (which is the essence of the Dental Manikin component of Part 2). Only then could the inference about clinical proficiency be made from the assessment and be considered valid. All of the assessment methods employed in Part 2 of the ORE therefore require the candidate to perform, and be observed performing, a clinical task.

The GDC provides guidance to the suppliers of the examination on how it considers learning outcomes to be aligned to the various components of the ORE, in the form of a generic blueprint. At the time of writing the ORE is in a period of transition with respect to the learning outcomes assessed, such that Part 1 is blueprinted against specified outcomes in Preparing for practice,2 whereas Part 2 is blueprinted against the outcomes in The first five years.3 The new contract for the supply of Part 2, which will commence in 2017, will require assessment against PfP outcomes. Table 1 is an excerpt from a generic blueprint in which PfP outcomes have been mapped against the entire examination.

Table 1 Extract from the generic ORE Blueprint – Examination components mapped against Preparing for practice outcomes. 'Y' indicates that the GDC considers that the outcome in the left-hand column could be assessed in this component of the examination
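In practical terms a blueprint of this kind is simply a two-dimensional mapping from learning outcomes to examination components. The sketch below shows one way such a mapping might be represented and queried; the outcome codes are hypothetical placeholders and the component names are used for illustration only, not taken from the actual GDC blueprint:

```python
# Hypothetical blueprint: each outcome maps to the set of examination
# components in which it could be assessed (the 'Y' cells in Table 1).
# Outcome codes and component names are illustrative placeholders.
BLUEPRINT: dict[str, set[str]] = {
    "outcome_1.1": {"Part 1 Paper A", "OSCE"},
    "outcome_1.2": {"Part 1 Paper B", "Dental Manikin"},
    "outcome_2.3": {"OSCE", "Medical Emergencies"},
}

def components_for(outcome: str) -> set[str]:
    """Return the components in which a given outcome could be assessed."""
    return BLUEPRINT.get(outcome, set())

def coverage_gaps() -> list[str]:
    """Outcomes mapped to no component at all - flags blueprint gaps."""
    return [o for o, comps in BLUEPRINT.items() if not comps]

print(components_for("outcome_1.1"))  # the two components mapped above
print(coverage_gaps())                # [] - every outcome is covered
```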

Two further comments can be added on the alignment of assessment methods. The first is that no single assessment method is likely to be 'perfect' in terms of possessing all the qualities of good assessment in equal abundance. The use of a number of different assessment methods in a 'scheme' of assessment (or an examination with multiple components) is intended to balance strengths and weaknesses, producing overall trustworthy outcomes.8 The second is that traditional examinations cannot capture evidence relating to the 'Does' domain of Miller's triangle. In this respect the ORE is like many other entry-level assessments of professional practice, in that it reveals whether a candidate can perform requisite skills to an appropriate standard in the context of an examination, but cannot guarantee replication of those skills and standards in the workplace once that individual becomes a registered practitioner. For that reason many professions now place a strong emphasis both on continuing professional development and on monitoring standards of professional practice throughout the careers of registered practitioners.

Wide sampling of learning outcomes

The validity of the ORE depends fundamentally upon wide sampling of the learning outcomes that define the scope of the examination. The need for wide coverage of learning outcomes also influences the choice of assessment instruments; for instance, a multiple choice question paper can sample more learning outcomes than an essay paper of equal duration. A significant number of good quality items is required to make reliable estimates of each candidate's depth and breadth of clinical knowledge.
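The link between the number of items and reliability is conventionally described by the Spearman-Brown prophecy formula, a standard psychometric result (not one cited in this paper): lengthening a test by a factor k changes its reliability r to kr / (1 + (k − 1)r). A worked sketch with illustrative figures:

```python
def spearman_brown(r: float, k: float) -> float:
    """Predicted reliability of a test lengthened by factor k.

    r: reliability of the current test (between 0 and 1)
    k: ratio of new test length to current test length
    """
    return (k * r) / (1 + (k - 1) * r)

# Illustrative figures only: a paper with reliability 0.70,
# doubled in length with comparable items.
print(round(spearman_brown(0.70, 2.0), 2))  # 0.82
```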

The sampling of learning outcomes in the ORE is determined by, and checked against, a 'blueprint' for each diet of the examination. The diet-specific blueprint maps learning outcomes against specific items, stations, scenarios, exercises, etc, used in that diet. As well as demonstrating the extent of sampling, the use of a blueprint of this type ensures that all items and tasks are clearly aligned to the curriculum.

Standard setting

Given the different purposes and designs of Parts 1 and 2, each uses a different approach to setting standards. Both Part 1 papers use a modification of the Ebel9 method, while the Part 2 components use modifications of the Angoff9 method (a simplified sketch of the Angoff approach follows the list below). Both approaches are based on the professional judgements of the standard setting panels, and are used to set 'criterion' (absolute) standards, rather than norm-referenced (cohort) standards. As such, a candidate passes each examination component if they have met the minimum standard required, irrespective of the performance of their peers on the same assessment. There are important consequences to this approach:

  • The pass mark for each examination is variable as it reflects the difficulty of the items, stations or tasks with which candidates are presented

  • The number of candidates passing or failing the examination will be entirely dependent on the number achieving the pass mark. In theory, an entire cohort of candidates could thus either pass or fail.
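By way of illustration, the sketch below shows a simplified, generic Angoff-style calculation (the specific modifications used in the ORE are not described in this paper). Each judge estimates, for every item, the probability that a just-passing 'borderline' candidate would answer it correctly; the cut score is the mean of those estimates:

```python
# Rows are judges, columns are items. Each cell is a judge's estimate
# of the probability that a borderline candidate answers the item
# correctly. All figures are invented for illustration.
judgements = [
    [0.6, 0.4, 0.8, 0.5],  # judge 1
    [0.7, 0.5, 0.7, 0.4],  # judge 2
    [0.5, 0.5, 0.9, 0.5],  # judge 3
]

def angoff_cut_score(judgements: list[list[float]]) -> float:
    """Cut score as the mean estimated probability across all judges
    and items, expressed as a proportion of the available marks."""
    flat = [p for judge in judgements for p in judge]
    return sum(flat) / len(flat)

print(f"Cut score: {angoff_cut_score(judgements):.1%}")  # 58.3%
```

Because the estimates reflect item difficulty, a harder set of items yields a lower cut score, which is why the pass mark varies between diets, as noted above.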

The ORE employs two additional methods as a check on the appropriateness of the standards and the reliability of the outcomes:

  1. Given the high stakes nature of the ORE, and the potential consequences for patients of false-positive outcomes (unintentionally passing candidates who are not fit to practise in the UK), the standard error of measurement (SEM) is added to cut scores derived from standard setting. The SEM is a numerical measure of how much measurement 'noise' exists within the range of scores produced by any one examination and, therefore, of the reliability of each candidate's pass or fail outcome. On any assessment the outcomes closest to the cut-score are likely to be the most problematic and potentially unreliable. To control for false-positive and false-negative outcomes around the cut-score, and to ensure the defensibility of the standard, the SEM is applied to create a range (like a confidence interval) within which the pass and fail outcomes are considered 'borderline'.10 A worked sketch of this calculation follows the list below. Candidates with scores in the borderline range in Part 1 are considered to have failed. In Part 2, a candidate with a single borderline score in one of the four components can pass the examination provided they have clearly passed the other three components. The value of the SEM in supporting decision making for borderline candidates has been reported.11

  2. Post-assessment, the Borderline Regression method12,13 is used as a check on the reliability of the Part 2 cut scores established using the modified Angoff method (this method is also sketched below). The Board of Examiners for Part 2 reviews the cut-scores and resolves through discussion and negotiation any discrepancies between the outcomes of the two methods.

Standardisation

Each candidate at a particular diet of the ORE should have, as far as possible, an assessment that is fully equivalent to the assessments in all other diets of this examination. Achieving this type of equivalence (or comparability) is more straightforward in objective written tests, but where clinical tasks are involved there are three principal challenges to overcome. The first challenge is to select practical tasks that provide candidates with an equivalent test across diets. For this reason a great deal of care goes into the creation of tasks that sample relevant skills, are not so similar from one diet to the next that candidates can be coached inappropriately to pass them, and which can be assessed accurately by teams of trained examiners. Many tasks would, in a clinical setting, involve interactions with patients and the second challenge is how to standardise such patients so that candidates face similar tests of performance. The pursuit of fairness through standardisation of the assessment experience is a key reason why the ORE, unlike previous incarnations of the statutory examination, does not involve interaction with actual patients. Instead considerable effort is expended in the creation of realistic scenarios that make use of trained role players as standardised patients.14 The third challenge for the standardisation of clinical examinations relates to the role of examiners. Examiners' judgements in the face of identical candidate performance can and do vary. Great importance is attached to the selection of examiners who are familiar with the standard of first registrants and then to their thorough training and calibration so as to limit this variance as far as possible.
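One simple way to monitor residual examiner variance, offered here purely as an illustration rather than a description of the ORE's actual procedure, is to compare each examiner's mean awarded mark with the overall mean and flag outliers for attention at calibration:

```python
from statistics import mean, stdev

# Marks awarded by each examiner across comparable candidates
# (invented data for illustration).
marks_by_examiner = {
    "examiner_A": [62, 70, 58, 66],
    "examiner_B": [55, 49, 60, 52],  # a possible 'hawk'
    "examiner_C": [64, 68, 61, 65],
}

examiner_means = {e: mean(m) for e, m in marks_by_examiner.items()}
overall = mean(examiner_means.values())
spread = stdev(examiner_means.values())

for examiner, m in examiner_means.items():
    z = (m - overall) / spread
    flag = "  <- review at calibration" if abs(z) > 1.0 else ""
    print(f"{examiner}: mean {m:.1f}, z = {z:+.2f}{flag}")
```

Far more sophisticated approaches exist, but the principle of detecting and correcting systematic differences in examiner severity is the same.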

Part 2 examination components run over two days, and the GDC and ORE providers are aware of instances of day one candidates sharing information with day two candidates. To mitigate the risk of candidates gaining insight into examination content and being unfairly advantaged by any collusion, there is a purposeful strategy of varying assessment scenarios across the two days.

Quality assurance

Both the GDC and the examination suppliers have quality assurance mechanisms in place; only those that are the responsibility of the GDC will be described in this paper.

The ORE Advisory Group (OREAG) is chaired by a senior dental academic who is supported by two Chief External Examiners, who are also senior dental academics, and two educational assessment specialists from non-dental disciplines. Other expertise is recruited on an ad hoc basis according to need. The Advisory Group is supported by GDC Examinations Team staff and reports to the GDC Executive. The OREAG is responsible for:

  • Quality assurance, including:

    • Consistency of standards and outcomes

    • Scrutiny of the examination process, ensuring it remains valid and appropriate

    • Transparency and fairness

  • Providing guidance on regulation and policy development

  • Reviewing and implementing suggestions for continuous improvement of the examination (quality enhancement).

The principal means by which the GDC monitors the examination is through its external examiners. The arrangements for external examining in the ORE are consistent with the requirements of the Quality Code of the Quality Assurance Agency,15 although, clearly, there are differences in the circumstances under which the ORE operates in comparison with assessment in a Higher Education Institution. External examiners are appointed, via a national recruitment and selection process, from amongst UK dental academic and clinical staff with close involvement in undergraduate education. The external examiners receive induction, followed by yearly update training and biennial appraisal. In relation to every diet of the examination they:

  • Scrutinise examination items, artefacts, etc, prior to the diet, including involvement with standard setting and blueprinting

  • Attend the diet in person and monitor all aspects of delivery of the examination, with a particular emphasis on the standard applied

  • Submit post-diet reports containing their observations on the conduct of the examination.

A chief external examiner (CEE) oversees the work of the external examiners at each diet and collates their post-diet comments into a single report containing any necessary recommendations. The CEE's report is submitted to both the supplier and the OREAG. The supplier responds to the CEE in its own report submitted to the OREAG. Post-diet supplier reports include the minutes of the Board of Examiners, attended by the CEE, at which the examination is reviewed and the results verified. Suppliers also submit annual (Part 1) and bi-annual (Part 2) reports to the OREAG. Figure 2 illustrates the reporting structure. The OREAG meets four times per year and at each meeting the CEE and supplier reports are considered together in detail, the purpose being to monitor standards, as described above, and to confirm that there has been an adequate response to recommendations.

Figure 2: Quality assurance reporting structures in the ORE

Representatives of the examination suppliers also meet with the OREAG on a routine basis, as part of the GDC's agenda for continuous improvement, innovation and quality enhancement. In recent years, the innovations that have been developed and implemented by the examination suppliers in partnership with the OREAG include: the introduction of electronic mark capture, the use of role-play in calibration of examiners, and routine analysis of assessment data using appropriate psychometric methods.
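Among routine psychometric analyses, internal-consistency statistics such as Cronbach's alpha are standard for MCQ papers (the specific analyses used for the ORE are not detailed in this paper). A minimal sketch:

```python
from statistics import pvariance

def cronbach_alpha(item_scores: list[list[float]]) -> float:
    """Cronbach's alpha from per-item score columns.

    item_scores: one inner list per item, each holding every
    candidate's score on that item (same candidate order throughout).
    """
    k = len(item_scores)
    item_var = sum(pvariance(item) for item in item_scores)
    totals = [sum(candidate) for candidate in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Invented data: three dichotomous items, four candidates.
items = [
    [1, 0, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
]
print(round(cronbach_alpha(items), 2))  # 0.56
```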

Conclusion

The primary role of the GDC is public protection, and the ORE is designed to ensure that overseas qualified dentists, whose education and training has not been quality assured by the GDC, meet the minimum standards required for safe practice in the UK. This paper has outlined the key design characteristics of the ORE and the associated quality assurance processes. The context of the ORE is very different from assessing dental students during and at the end of a five-year full-time BDS/BChD programme. Nonetheless, the notion of equivalence between a candidate passing the ORE and one passing the final BDS/BChD examination, both of whom would be entitled to apply for first registration, is central to the purpose and design of the ORE. Whilst not every aspect of Standards for education,16 the GDC's benchmark document for UK dental education providers, is applicable to the ORE, much of it is, and a key role of the OREAG is to ensure compliance with the relevant standards.

The ORE has undergone significant change since its introduction in 2007. It will be further modified in the near future as it is adapted to meet the new challenges set out in Preparing for practice.2 This may prompt the use of alternative methods of assessment to ensure that the design of the ORE continues to represent current ideas of best practice in assessment, and the content appropriately samples across all the requirements set out for UK dentists in the twenty-first century.