Main

This paper is about assessment in postgraduate dental education. By 'assessment' we mean 'measuring progress against defined criteria'. Good assessment is 'relevant' in that it reflects the content of the curriculum and the knowledge, skills and attitudes the training period is designed to develop. Good assessment is able to indicate future success ('predictive validity'). It should also have standards that are applied uniformly across settings and time ('consistency'). Further, good assessment makes efficient use of resources (ie is 'cost effective').

The developments planned in the assessment of postgraduate dental education from 1993 onwards (in the wake of the Calman Report1) have only been partially implemented to date. The current changes are evolutionary in that they seek to build on past practice and tend to be based upon the work of small development groups which then disseminate good practice.

At the specialist level, the Report of the Chief Dental Officer (CDO)2 in 1995 proposed that higher specialist training should be shorter, better structured ('seamless', modular, aims-led), and more flexible, whilst maintaining high standards. The competent authority, the General Dental Council (GDC), would set these standards.

Regarding the pre-specialist training level, the CDO's Report2 made recommendations for general professional training. It suggested that young dentists should undertake an initial two-year period of general professional training. This would be in both primary and secondary care settings so as to support all career options at the end of general professional training. Although widely supported,3,4,5 this proposal has unresolved implications for assessment.

Current practice in postgraduate dental education gives some indication of the kind of mixed model of assessment that is evolving. There is increasing interest, for example, in the use of logbooks for recording experience. However, there are no formal methods for ensuring consistency in the ways that logbooks relate to the quality of experience and levels of achievement. There is, therefore, a danger that the desire to make assessment more closely related to working contexts and more patient-focused is inadequately supported by assessment practice.

An awareness of this danger has given rise to this paper. In it we make suggestions for the improvement of assessment in postgraduate dental education. These recommendations are based on a study which evaluated the strengths and weaknesses of the existing assessment systems. The study considered the assessment of postgraduate dental training across primary and secondary care, focusing on relevance, consistency, and cost-effectiveness as factors essential to 'good' assessment.

Findings from the study have been reported elsewhere.6,7 Therefore the method and a summary of findings are reported only briefly here. The specific purpose of this paper is to make recommendations for development and suggest proposals for change in the assessment of postgraduate dental education.

Method

The study was conducted in four overlapping phases over a period of one year starting in March 1998. Based on a review of the literature and three interviews, the first phase mapped the current provision of postgraduate dental education and its assessment in vocational, basic and specialist training, including examinations and inspection visits.

In the second phase a more detailed study of assessment in practice was undertaken in order to explore the ways assessment is experienced by assessors and trainees. A range of postgraduate training programmes and placements (in primary and secondary care, and in general and specialist training) was selected from the West Midlands Deanery for more detailed study of the policy and practice of assessment. Published course curricula and examination syllabi, trainee log-books/portfolios and other assessments and records were gathered and analysed. Semi-structured interviews were conducted with trainers and trainees in the West Midlands Deanery. In secondary care, this involved five consultants, 13 HO/SHOs (including those undertaking the formal, integrated GPT 'package'), and 13 Specialist Registrars. In primary care, meetings of vocational trainees (VDPs) were observed, and interviews were held with the organisers responsible for the GPT package. In addition, VT Advisors were consulted at their national conference, and four were followed up individually.

The third phase investigated the systems used to ensure effective management of assessment, including inspection procedures. Opinion from those key informants identified in phase one was sought through semi-structured interview and expert panels in order to investigate the current position and potential for development of the management of assessment at a national level. Those consulted included representatives from the General Dental Council (GDC), Postgraduate Dental Deans and Deans of dental schools, the Royal College of Surgeons of England, the Hospital Recognition Committee, two Specialist Advisory Committees, vocational training (VT) advisors, and the Committee for Vocational Training (CVT).

In phase four the earlier phases of the work were analysed and reports and recommendations prepared.

Findings

A summary of overall strengths

  • The study found that the profession brings a high degree of commitment and experience to issues of assessment.

  • There is a background of recent commissioned work, drawing attention to training and assessment issues.8,9,10 These works reflect interest in assessment within the profession, and provide good quality advice about technical aspects of assessment.

  • There are a number of occasions for sharing experience in training and assessment, for example, through meetings of Postgraduate Dental Deans and VT trainers.

  • The overarching role of the GDC provides a quality framework for monitoring the consistency of formal aspects of assessment.

  • The National Faculties of Dental Surgery and General Dental Practitioners co-ordinate and maintain standards in formal parts of the assessment system (ie the faculty examinations).

  • The MGDS (Membership in General Dental Surgery) and FFGDP (Fellowship of the Faculty of General Dental Practitioners) diplomas are examples of broad-based assessment. Both include a practice visit and use of patients in the assessment. Assessment for the FFGDP includes a patient satisfaction survey and use of video recording to demonstrate interpersonal skills. Both are advanced qualifications available to experienced practitioners and do not mark the end of a period of formal training. They may, however, be models of how college assessments might develop in future.

A summary of overall weaknesses

  • There is an emphasis in parts of the assessment system on recording numbers rather than assessing quality.

  • There are several forms of unregulated assessments, such as references, interviews and 'grapevine knowledge'. The criteria for these are not explicit.

  • The system of assessment needs to be more open and transparent so that the standards and procedures for assessment become clearer to the public.

  • The interface between training and assessment is sometimes unclear, particularly between appraisal and assessment and between formative assessment and summative assessment.

The purpose of our evaluation was not only to identify strengths and weaknesses in the current assessment of postgraduate dental education, but to offer recommendations for modification based on those strengths and weaknesses.

Recommendations

In this section we set out recommendations for the further development of the assessment of postgraduate dental education. There are three broad areas in which development is recommended:

  • a competence-based model of assessment;

  • distinguishing assessment of the trainee and analysis of the trainee's educational needs;

  • quality assurance.

Of these, the introduction of a competence-based model is the most significant.

A competence-based model

Defining the competencies, or range of abilities, that a trainee can demonstrate at each stage of training would have several advantages. It would be the basis for identifying distinctive characteristics of different periods and types of training. It would help identify gaps, overlaps and repetitions in curricula and assessments. It would provide a clearer connection between successive periods of training, and a more explicit relationship between periods of training and the associated assessments. The trainee would be assessed on the key competencies that they and their trainers had identified and worked towards during the particular training placement. It would more clearly distinguish between necessary periods of experience and the standards needed for progression to the next stage of training.

A competence-based model would give a stronger underlying framework of assessment for recording in logbooks and clearer guidance for trainers in their analysis of the learning needs of trainees. It would require that the aims of the curriculum and its assessment are clear to both trainer and trainee. The model would also require valid methods of assessment, including assessment of some skills and attitudes not addressed directly within current assessment procedures.

A competence model would be particularly valuable for pre-specialist general professional training. This two-year period, often undertaken on a self-constructed basis (rather than through a formal, integrated scheme), provides challenges for assessment since it includes experience in both primary and secondary care. The primary care period of general professional training is typically the VT year. For the period in secondary care the recent graduate takes an HO/SHO post. Each setting has its own approach to assessment. Currently there is no overall assessment of general professional training, apart from the national formal examination of the Faculties of Dental Surgery, the MFDS (Membership of the Faculty of Dental Surgery).

Competence models are not without their critics, however. Fish and Twinn,11 for example, argue that such an approach does not account for what trainers do and why they do it. Nor does it show whether the trainee is refining practice through critical reflection on experience, or just going through the motions. They go on to suggest that such a model is: 'unable to recognise the essentially incomplete, uncertain, and collaborative nature of professional activity, ignores professional judgement and risk-taking, and takes no account of the moral dimensions of practice'. Harden et al12 also identify 'fierce opposition' to the similar notion of outcome-based education (OBE). Critics of OBE argue that it waters down academic content and conflicts with the 'wonderful, unpredictable voyages of exploration that characterise learning through discovery and inquiry'.13 Yet Harden and his colleagues12 counter that 'in medicine we cannot afford the luxury of ignoring the product', that such a 'voyage' could be inappropriate, and that OBE can in any case embrace a range of outcomes. After all, a dentist needs to be competent, and to perform competently. They argue that outcome-based education can accommodate all these factors, whilst also providing relevance and accountability through a 'clear and unambiguous framework' for curriculum planning and assessment.

Thus we argue that a competence-based model is a way forward in meeting the stated policy aims for postgraduate dental education. Competence-based assessment would provide a publicly more transparent statement of standards, while addressing the needs for a broader assessment-base and ways of applying consistent standards across different assessment settings.

Possible features of a competence-based model

A competence-based model requires a common framework for specifying learning outcomes. This would set out the required competencies in areas such as:

  • scientific knowledge

  • clinical skills

  • relationships with colleagues

  • communication with patients

  • appropriate attitudes to patients and colleagues

  • an evidence-based approach to practice.

A competence model would enable a wider basis of assessment, including, and giving a clear weighting to, communication skills (particularly with patients) and effective long-term patient care. The Good Assessment Guide10 provides comprehensive coverage of a variety of assessment methods with their advantages and disadvantages. Common choices include objective structured clinical examinations (OSCEs), standardised patients (SPs), computer-based examinations/simulations, videotaping of consultations (as with the FFGDP), peer rating and self-assessment.

Defining the competence expected in each period of training would involve a close examination of the 'fitness for purpose' (validity) of assessments, and there would need to be further development of 'authentic' assessments which match the realities of working contexts as closely as possible. Such assessments would be work-based and patient-focused. These authentic assessments would include those carried out in the context of actual practice, simulations of working situations, and assessments based on case histories. A likely development would be increasing use of case studies developed by trainees related to their patients.14

A competence model facilitates the development of credit accumulation, whereby credits for particular elements of training can be built up through periodic assessments. This can help to provide structured feedback on progress and incentives for each element of training. It also provides clear criteria on which periods of training need to be repeated or extended in order to meet standards. Credit accumulation provides the overall structure whereby training from different settings can be coherently viewed as part of an overall period of training.

A competence-based model would also challenge current training, which is based on the requirement that specified periods of time are served. 'Time-serving' was strongly defended by most of those who gave information and views in our study, on the grounds that it provides maturation periods in which experience is extended. Training and assessment based on time-serving may also be the most straightforward system to administer and monitor in terms of completion of minimum requirements. Yet in a competence-based model the levels of competence may be reached by different trainees at different rates: some become competent very quickly, most get there in the end, while a few never do so.

Our specific recommendations therefore would be for further development of competence frameworks for assessment within all levels of postgraduate education. This would help to give flexibility in location and delivery of training. It would maintain consistent and transparent standards for assessment so enabling NHS dentistry to meet public expectations of accountability.

Distinguishing assessment of the trainee and analysis of the trainee's educational needs

There should be a clearer distinction between needs analysis (the identification of educational and training needs) and the assessment of trainees. Analysis of the trainee's educational needs is something that should take place with the trainer at the beginning of each training post. Learning needs should be related to the competencies intended to be achieved during the training period. In distinguishing assessment from needs analysis, clearer criteria for assessing the VT and HO/SHO training would be needed. Assessment, by contrast, would involve some formal assessment of the HO/SHO and VT periods of training, and a requirement for satisfactory completion for certification of VT. This might also be linked to an overall assessment for general professional training, which could help to structure and provide incentives for this important development.

VT and HO/SHO trainers would benefit from fuller training in both needs analysis and assessment. It may be appropriate to introduce differential levels of supervision of HO/SHOs depending on assessed levels of competence. This could result in reduced supervision for some. At present part A of the MFDS (or part 1 of the MFGDP) is informally used in career progression. The marker of satisfactory completion of general professional training should also be appropriate to those who do not wish to pursue a specialist career. Without proper assessment of VT and HO/SHO trainees it is difficult to identify under-performing trainees. With robust assessment in place, those whose performance or ability is judged to be seriously deficient might in the future be referred to the performance review scheme, as proposed by the GDC, for further training.

Quality assurance

The quality assurance systems currently in place have a key role to play in maintaining consistency and relevance of assessments. The effectiveness of this role depends upon having clear and agreed criteria for inspecting assessment systems and assessment practice. In turn, that clarity depends upon there being agreed criteria for the assessments themselves. The developments recommended above would suggest some refocusing of quality assurance procedures to give more detailed attention to the criteria for assessments.

Conclusion

We identify specific proposals which might be considered by national regulatory bodies and education providers and organisers within the dental profession.

We recommend that the national regulatory bodies consider the following proposals:

  • To address the policy aims of postgraduate dental education, the overall management of assessment at the national level needs to be strengthened. This would involve moving beyond the evolutionary model of development to a planned system of assessment across the different types of training (for example, VT, HO/SHOs and SpRs).

  • A competence model of assessment through specification of learning outcomes and required knowledge, skills and attitudes would make the overall assessment system more coherent and transparent. The development of such a model, while building upon current pilot projects, needs national co-ordination to ensure coherence and consistency.

  • A competence-based model should be designed to support a widening of the assessment base with greater scope for work-based and patient-focused assessments. There is a need, for example, at all levels of postgraduate training, for clearer weighting of communication skills, particularly with patients.

  • Clearer criteria for inspecting assessment systems and their application should be built into the quality assurance procedures.

  • The system of assessment needs to be more open and transparent so that the standards and procedures for assessment become clearer to the public.

For the education providers and organisers we suggest the following:

  • Clearer criteria for the assessment of trainees in VT and HO/SHO training, including a requirement for satisfactory completion for certification of VT.

  • The assessment of trainees in the VT and HO/SHO periods of training should be integrated into an overall assessment of general professional training: a common assessment reflecting this common period of training. This could help to structure and give incentive to this important development.

  • Fuller training for VT and HO/SHO trainers in both the analysis of trainees' educational needs and assessment.

  • The use of assessment to determine differential levels of supervision of HO/SHOs depending on assessed levels of competence.

  • The introduction of stronger incentives into assessment procedures to acknowledge high performing professionals – trainers, trainees or inspectors.

  • Greater attention to the amount of resources used in assessment and, to improve accountability, more data on the cost-effectiveness of assessment.