Introduction

The General Dental Council (GDC) became involved in dental training within months of its first meeting in July 1956.1 However, it was not until the Dentists Act 1984 that responsibility for dental education formally passed to the GDC.2 Since its original curriculum, The First Five Years,3 in 1997, several iterations have appeared, the latest being Preparing for Practice, which redefines the required pre-registration learning outcomes for all registrants.4

Vocational training emerged from a profession-wide concern that the gap between student and professional life needed to be bridged.5 It began as a voluntary scheme before becoming mandatory in 1993 for all UK-qualified dentists wishing to work in the National Health Service (NHS).6 Regional schemes usually consist of 12–14 pairings of foundation dentists (FDs) and their educational supervisors (ESs), who work together in practice for most of the year, the vast majority within NHS general dental practice (GDP). A training programme director (TPD) is usually appointed to run each scheme and is responsible for the FDs' pastoral and educational supervision.

Vocational training is now termed dental foundation training (DFT) and has a curriculum based on specified learning outcomes against which competence is assessed. The first was published in 2007 by the Committee of Postgraduate Dental Deans and Directors (COPDEND)7 and was designed for a two-year foundation programme. Since most new graduates complete only the mandatory first DFT year, COPDEND has recently published a draft 2015 curriculum8 which 'meets contemporary needs of new dental graduates in the critical period of transition to assured and proficient independent NHS practice'.9

Some authors have expressed concern that DFT is no longer a 'finishing school' for dentists, but is providing core practice and instruction that would previously have been delivered in dental schools.10 This perception is not limited to the UK, with a similar lack of preparedness reported in other countries such as Hong Kong (whose course is based on the traditional UK undergraduate course),11 Australia and Canada.12 There is a perception among educational supervisors that undergraduate training has been 'diluted' and that new graduates entering DFT are not as capable practically as they once were,1 with 40% of them considering the undergraduate curriculum poor in preparing dentists for independent practice.13

There has been limited research into the preparedness of new graduates for independent practice. Previous studies have sampled regions of the UK,10,14 or looked mainly at elements of general practice.6,15,16

In order to develop a holistic picture of new graduates' preparedness for independent GDP, across the whole GDC curriculum, there is a need to develop a contemporary questionnaire. This paper addresses that need and describes the development of the Graduate Assessment of Preparedness for Practice (GAPP) questionnaire.

Methodology

An extensive literature review of three key areas was carried out prior to developing the GAPP questionnaire.

The Academic Search Complete database was used to search the following terms:

  • Questionnaire and/or questionnaire development

  • Dental foundation training and/or vocational training

  • UK dental education

  • Preparedness for dental practice.

The first task in constructing the GAPP questionnaire was to review and understand the information requirement of the study.17 Since the aim was to establish a new dental graduate's preparedness for GDP, it was deemed appropriate to use the learning outcomes set out in the GDC curriculum Preparing for Practice,4 which defines the competencies expected of an independent dental practitioner post-graduation. Preparedness could also be measured against other variables, such as age, gender and school of qualification.

Initial development of the instrument was completed by the researcher and their supervisory team and was subject to review by the University Ethics Committee. The supervisory team comprised the author, a DFT Training Programme Director, a Dean of Postgraduate Dental Education and a Senior Lecturer in the university's education faculty. The draft was subsequently vetted by an authority independent of the supervisory team, a Dental School Dean, and was then ready for piloting.

Two versions of the GAPP questionnaire were generated, one for FDs and one for ESs; these were almost identical and each comprised three parts.

Part 1 collected descriptive data, including gender, age, school of qualification and length of course (4 or 5 years for FD respondents). The ES questionnaire differed in that it included items on length of experience as an ES and whether they had themselves completed 'VT'.

Part 2 was based on the competencies set out in the GDC curriculum. The 154 learning outcomes were rationalised to 34 questions, a process that sought to reduce the number of questions, whilst retaining the domain boundaries.

The use of competency statements to develop a measurement of self-efficacy in this study was based on work described by Bandura.18 In order to contextualise the question in terms of self-efficacy, maintain focus on the question area and reduce the length of individual questions, a single question stem was designed to precede all questions that read: 'How well prepared do you feel for general dental practice in order to...?'

The ES questionnaire was designed to elicit a rating of their current FD on the same competencies, and the stem was modified to read: 'How well prepared do you feel your FD is for general dental practice in order to...?'

Questions were presented in the order that they appeared in the curriculum, and took the form of a continuation of the stem, for example: 'How well prepared do you feel for general dental practice in order to carry out an orthodontic assessment and discuss treatment options with the patient?' The questions are displayed in Table 1.

Table 1 The 34 Part 2 GAPP survey questions, preceded by the stem: How well prepared do you feel for general dental practice in order to...?

A 7-category rating scale was adopted, ranging from 'completely unprepared', through 'very poorly prepared', 'poorly prepared', 'not well or poorly prepared', 'well prepared' and 'very well prepared', to 'completely prepared'.

Likert-type scales were originally described with five responses,19 significantly expanding the potential information over 'yes/no' or other dichotomous responses.20

Based on the literature review, it was considered important to balance validity, reliability and discriminating power. Seven categories increase discriminating power18 and are postulated to avoid the respondent stress that accompanies larger scales.21 A greater number of categories would also have severely compromised the ability to name them appropriately. The category wording was designed to fit the assumption that the psychometric distances of the categories from neutrality were equivalent.22

Although odd-numbered scales may lead to 'drifting towards the mean' and mask positive or negative responses,6 the absence of a central category can lead to respondent irritation and increase non-response bias.23

Part 3 was designed to allow respondents to expand on their previous responses and to elucidate areas of their undergraduate courses which they felt were particularly helpful or unhelpful in terms of their preparedness, and to ascertain their expectations of DFT.

Piloting the questionnaire

University ethical approval for the pilot was granted by the host university (STEM 026). Since the proposed pilot population was a local DFT scheme, permission was gained from the local Director of Postgraduate Dental Education, with the approval of the local IRAS contact.

Participant information and consent sheets were designed to introduce and explain the nature and relevance of the research and encourage participation. In addition to the pilot questionnaire and information sheets, a structured feedback sheet was issued to gain all participants' views on the GAPP questionnaire's content and format.

The documents were sent as email attachments to 14 FDs and 14 ESs (a complete DFT scheme), with clear instructions on how to return the feedback form by email to the author. Documents were sent in MS Word format to facilitate ease of completion.

The pilot study took place in June and, since this was during the last quarter of the DFT year, we anticipated that this might influence ratings of preparedness.

Data analysis

The quantitative categorical data from Part 2 of the questionnaire was coded to allow statistical analysis: a code of 1 represented 'completely unprepared', through to 7 representing 'completely prepared'.

Data was processed using IBM SPSS (Version 20). Non-numerical Part 1 questions which were to become variables for statistical analysis required numerical coding, for example, gender was converted to 1 (female) and 2 (male).

Median scores with interquartile ranges (IQR) were recorded for each question for FDs and ESs. Mean rank scores were also generated in order to compare ES and FD responses using the Mann–Whitney U test for two independent samples, chosen due to the non-normal nature of the data.

Differences in mean rank scores were considered statistically significant if P ≤ 0.05.
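Although the analysis itself was performed in SPSS, the pipeline is compact enough to sketch in code. The following is a minimal illustration in Python of the steps described above, assuming the coded 1–7 responses for a single question are held in two lists; all variable names and response values are hypothetical and shown only to make the procedure concrete.

  import numpy as np
  from scipy.stats import mannwhitneyu

  # Hypothetical coded responses for one GAPP question
  # (1 = 'completely unprepared' ... 7 = 'completely prepared')
  fd_scores = [6, 7, 5, 6, 6, 7, 5, 6, 6, 7, 6, 5]
  es_scores = [5, 6, 5, 6, 5, 6, 4, 6, 5, 6]

  # Descriptive statistics: median and interquartile range per group
  for label, scores in (("FD", fd_scores), ("ES", es_scores)):
      q1, median, q3 = np.percentile(scores, [25, 50, 75])
      print(f"{label}: median = {median}, IQR = {q3 - q1}")

  # Mann-Whitney U test for two independent samples of ordinal,
  # non-normally distributed data; two-sided P <= 0.05 taken as significant
  u_stat, p_value = mannwhitneyu(fd_scores, es_scores, alternative="two-sided")
  print(f"U = {u_stat}, P = {p_value:.3f}")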

Results

Response rate

The response rate was 86% for FDs and 71% for ESs.

Pilot feedback

Respondents reported that the GAPP questionnaire took an average of 17 minutes for FDs and 25 minutes for ESs to complete, with an overall range of 7–45 minutes.

All respondents found the instructions clear and easy to follow; the only comment received in the free-text area was 'very clear instructions'.

All participants bar one felt that the response categories in Part 2 of the questionnaire gave them suitable scope to state their position. The one dissenting ES commented that it was 'impossible to say if the restorations are long lasting'.

Free-text feedback on the format of Part 2 was overwhelmingly positive. Constructive suggestions from ESs included adding a column for comments after each question, removing the central Likert category to stop respondents choosing the 'simple' middle option, and a comment that the questions were too long and multifaceted.

FDs were also very positive. Their constructive feedback likewise included the suggestion of a comments column, and that presenting the questionnaire in landscape format would allow a larger font.

Most respondents (86%) felt that Part 3 of the questionnaire gave them adequate opportunity to express their feelings, although three respondents felt the wording of the penultimate question was ambiguous.

The GAPP questionnaire was altered as a result of the feedback. The page orientation was converted to landscape, which also facilitated the addition of a 'comments' column to Part 2, allowing respondents to clarify the reasons for their categorical responses. The wording of the penultimate question in Part 3 was also amended.

GAPP questionnaire pilot results

Twenty percent of ES respondents were female, in contrast with the predominantly female (75%) FD respondents.

The median (IQR) and mode of all questions for FDs and ESs are displayed in Table 2.

Table 2 GAPP pilot survey descriptive data

FDs felt 'well prepared' for independent practice in seven of the 24 clinical areas and 'very well prepared' in 14. They felt 'completely prepared' in giving prevention advice and in administering local anaesthesia. They rated themselves lower ('not well or poorly prepared') in only one area, orthodontic appliance repair, and did not feel 'poorly prepared' in any clinical area.

In all areas of professionalism they felt 'completely prepared', while in the communication and management domains they felt 'very well prepared' or 'completely prepared'.

The ESs also rated FDs 'well prepared' in seven of the clinical areas, with 16 areas rated as 'very well prepared'. The ESs felt the FDs were not 'completely prepared' in any area. They also felt that FDs were 'not well or poorly prepared' for orthodontic appliance repair.

The ESs also felt the FDs were 'completely prepared' in the ethical and legal area of professionalism with all of the other non-clinical areas rated as 'very well prepared' or 'well prepared'.

Comparison of ES and FD results

The median (IQR) descriptive statistics in Table 2 show that ESs tended to score lower than the FDs. This applied for 26 of the 34 questions, while four areas were rated the same: orthodontic assessment, acute patient management, drug prescription and TMJ management. The areas of diagnosis, safeguarding and surgical extractions were scored marginally higher by ESs.

When the mean rank scores of ES and FD responses were compared using the non-parametric Mann–Whitney U test, only one statistically significant difference was identified, in the communication domain (patients and the public): ESs rated their FDs significantly lower than the FDs rated themselves (P = 0.038).

Discussion

GAPP questionnaire results

The results appear to illustrate that FDs feel well prepared for independent general dental practice at ten months of DFT, a view apparently shared by their ESs. Despite a general trend for ESs to rate FD preparedness slightly lower than the FDs themselves, this was only significant for communication with patients and the public.

Orthodontic appliance repair stood out as the lowest-ranked competency area for both populations. This may be explained by the NHS GDS contractual changes that came into force in 2006, which removed many general dental practitioners' (GDPs') ability to claim for orthodontic work on the NHS. We believe this has largely stopped the small amount of NHS orthodontics that GDPs carried out prior to the contractual changes.

These results should be viewed with caution due to the pilot sample size.

Questionnaire validity

In simple terms, a questionnaire is valid if it measures what it purports to measure. Cronbach stated: 'One validates, not a test, but an interpretation of data arising from a specified procedure'.24

It was felt essential that the GAPP questionnaire was designed to facilitate capture of results in the following ways:

  • FDs reported their preparedness in (specific question area) as (Likert scale response) at this particular time in their postgraduate career; and

  • ESs reported their perception of their FD's preparedness in (specific question area) as (Likert scale response) at this particular time in their postgraduate career.

Content validity

In order to be content valid, a questionnaire needs to accurately reflect a specific domain of content. This concept requires careful consideration when constructing a questionnaire such as this.

Nunnally stated that content validity 'rests mainly on appeals to reason regarding the adequacy with which important content has been sampled'.22

Many questionnaire-based studies attempt to describe an abstracted criterion (see below), with questions developed to collectively define that criterion. An excellent example, given by Carmines and Zeller,25 is a child's mathematics test that includes all forms of calculation (not just addition) in order to judge their overall mathematical proficiency.

With this GAPP questionnaire, we attempt only to describe self-reported preparedness by FDs, or preparedness perceived by their ESs, and not an abstract criterion such as competence.

This means that each question item in the GAPP questionnaire, when considered alone, should accurately reflect its area of content.

One of the limitations of the GAPP questionnaire is the compound nature of some of the questions that follow the stem, for example:

Question 8

How well prepared do you feel for general dental practice in order to...?

Appropriately manage the patient presenting in an unscheduled appointment, including management of acute orofacial trauma, infection and pain.

Clearly there are several skills within this area, and conceivably a respondent may struggle to provide a single response: they may feel 'very well prepared' to manage acute infections, but 'poorly prepared' to deal with orofacial trauma.

By couching the sentence within the heading of 'acute patient management', we hoped that respondents would describe their overall preparedness in this area.

The alternative would have been to use each competency statement in Preparing for Practice as a separate question item, making Part 2 of the questionnaire 154 questions long instead of 34. Given the inherent risk of questionnaire fatigue with so many questions, we opted for the shorter format while seeking to maintain high content validity.

In terms of the content validity of the GAPP questionnaire as a whole, we believe that, because it is the only questionnaire to incorporate all of the elements of the new GDC curriculum, it can glean a valid perception of self-reported preparedness, unlike other questionnaires that sample a limited (often only clinical) set of competencies.

Criterion-related (predictive) validity

One critical question that as yet remains unanswered relates to the ability of this GAPP questionnaire to predict new graduates' performance.

Further work is planned to establish the performance of new graduates, and how this relates to their self-assessment (or their ES's assessment) of preparedness in particular tasks. A clear relationship could then be formulated between the criterion variable (performance) and empirical scores on the GAPP questionnaire.

There are clear benefits in doing so; surveying dental trainees or graduates at any level will allow a picture of their likely performance in vivo to be drawn, and decisions made as to their readiness to treat patients safely.
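No such performance data yet exist, but one plausible way of expressing the criterion relationship, once they do, would be a rank correlation between questionnaire scores and an external performance measure. The sketch below is purely illustrative: the performance scores, and the choice of Spearman's rank correlation, are our assumptions rather than part of the study design.

  from scipy.stats import spearmanr

  # Hypothetical data: GAPP self-ratings (1-7) and an external clinical
  # performance score for the same eight trainees (illustrative values only)
  gapp_rating = [6, 5, 7, 4, 6, 5, 7, 6]
  performance = [72, 65, 80, 58, 70, 66, 85, 74]

  # Spearman's rank correlation suits ordinal Likert-type data
  rho, p_value = spearmanr(gapp_rating, performance)
  print(f"rho = {rho:.2f}, P = {p_value:.3f}")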

Construct validity

At this early stage in the development of this GAPP questionnaire, we have not attempted to develop theorised constructs based on responses to the questionnaire, but we have highlighted the potential benefits this may bring.

Questionnaire reliability

For the reasons outlined above, we believe the results from the GAPP questionnaire will be valid, if reported carefully and not extrapolated. But how reliable is it?

Test-retest reliability was not assessed, due to the specific timing of the questionnaire's use in this study and the considerable logistical implications of retesting and tracking responses.

Arguably, given the steep learning curve commonplace in DFT, even a small interval between test and retest may have introduced significant error into the assessment of reliability: how could we determine whether a higher score on retest reflected the FD's increased confidence or experience, rather than an unreliable test?

The issue of reactivity (scores changing due to prior exposure to a previous test) also increases the sooner the retest is carried out.

Statistical analysis of Likert-type questionnaires can often help to indicate the degree of reliability. Tests such as Cronbach's alpha26 or KR-2027 are invaluable for questionnaire designs in which multiple items attempt to represent a single criterion or construct.

For the GAPP questionnaire, however, such statistical methods are of no use; more importantly, their use would be fundamentally flawed and could lead to false assertions of reliability.

Cronbach's alpha26 necessitates the comparison of pairs of responses from the questionnaire, resulting in a score of internal consistency ranging from 1 (perfectly reliable) towards 0 (completely unreliable). Thus, if several questions were all concerned with self-esteem or some other abstract construct, answers to those questions should be similar, and alpha would generate a meaningful measure of the reliability with which self-esteem was measured.

In the GAPP questionnaire, each item reflects a very different competency from the GDC curriculum. We are interested in feelings of preparedness in these individual areas, rather than attempting to abstract the data to a construct such as 'general competence'. Pairing such disparate curricular elements in a statistical test of internal consistency would therefore yield a meaningless alpha value.
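To make this concrete, Cronbach's alpha is calculated from the item variances and the variance of the respondents' total scores, so it rewards items that move together across respondents. A minimal sketch of the standard calculation, on entirely hypothetical data, is given below.

  import numpy as np

  # Hypothetical response matrix: rows = respondents, columns = items
  responses = np.array([
      [6, 6, 5, 6],
      [4, 5, 4, 4],
      [7, 6, 7, 7],
      [5, 5, 5, 6],
      [3, 4, 3, 3],
  ])

  k = responses.shape[1]                         # number of items
  item_vars = responses.var(axis=0, ddof=1)      # variance of each item
  total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores

  # Cronbach's alpha: high only when items co-vary, that is, when they
  # measure one shared construct
  alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
  print(f"Cronbach's alpha = {alpha:.2f}")

With items as deliberately heterogeneous as the GAPP competencies, low inter-item covariance would depress alpha regardless of how well each item captures its own area, which is why we do not report it.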

We intend to use the GAPP questionnaire to elucidate the feelings of preparedness of FDs within two months of DFT's commencement, and to elicit their ESs' assessment of their preparedness at the same time point using the ES version of the GAPP questionnaire. The questionnaires will be distributed and returned by post, using the training programme directors of the DFT schemes as distribution points. Approval for this has already been granted by COPDEND.

Our suggestion would be that this questionnaire is repeated annually as each new cohort of dentists enters DFT, in order to develop a picture of how well they perceive their training to date has prepared them, and to act as a stimulus for developing DFT programmes prospectively.

Conclusions

GAPP is the first questionnaire to be published which can be used to establish self-reported preparedness of FDs and the reported preparedness of FDs by their ESs across all domains of the GDC curriculum.

GAPP appears to be a valid measure of preparedness for practice among graduates and their supervisors. The instrument is simple to complete and provides a useful analytical tool both for self-assessment of preparedness and for wider use within dental education. It offers one method by which those responsible for undergraduate and postgraduate training can compare graduates' competency, based on objective performance under clinical assessment, with their subjective perceptions of competence, highlighting areas for support as FDs enter DFT settings. It also serves as a before-and-after measure for both FDs and ESs to assess how perceptions of the FD's preparedness change during DFT.

The pilot results appear to show that FDs are well prepared for independent practice at ten months of DFT.

The GAPP questionnaire will be used to establish the preparedness of new graduates from both ESs' and FDs' perspectives in a nationwide questionnaire involving all DFT schemes in England and Wales.

Further work to criterion-validate the questionnaire (as a predictive instrument) may allow it to be used as an indicator of where focussed interventions within a dentist's continuing professional development are required.

Limitations of the pilot

The authors acknowledge the relatively small sample size of the pilot study, but were satisfied that the high response rate gave sufficient feedback on which to develop the final GAPP questionnaire.