Main

Clinical governance is a framework through which staff are accountable for improving the quality of services and safeguarding high standards of care.1 One component of this is peer review, by which clinicians assess one another's performance. This can be carried out in a number of ways and settings; examples include observation of clinical activity and analysis of outcomes against pre-defined performance indicators.

A referral-based service between general dental practitioners or community-based dentists in primary care and hospital-based specialist services is well established in all dental specialties. In restorative dentistry an important component of such a service involves giving advice on patient management to referring practitioners. This is usually carried out by an exchange of letters, with the referral letter being used at the consultation appointment and a reply letter being generated as the outcome.

For such care to be effective it is essential that communication between referring general dental practitioners and specialists is clear, unambiguous, relevant and realistic to ensure that the quality of care a patient receives is acceptable, particularly as in many areas patients have to travel considerable distances to see specialists. Communication between primary care practitioners and consultants is also a matter of concern for medical colleagues.2

There are very few published data on the quality of communication between referring dentists and specialists. A survey of 161 consultants across all specialties revealed that 76% of respondents felt that the standard of referral letters was adequate, 21% felt they were poor, and 2% felt that they were appalling.3 However, these results appeared to reflect the perceptions of the respondents rather than data derived from individual referral letters.

There seems to be even less evidence available on the quality of the letters written in reply by specialists, either in dentistry or medicine. One such study was undertaken by general medical, dermatology, neurology and gastroenterology colleagues in Amsterdam.4 In this study, a panel of four specialists and four general practitioners analysed 144 replies to referral letters. Letters were assessed according to quality and content, clarity, requests for return to general practitioner care, and the time intervals between referral and consultation and between consultation and the specialist reply. There was considerable disagreement between judges about the quality of letters, and it was acknowledged that such standards are subjective.

The aim of the present study was to assess the quality and appropriateness of replies to practitioners from specialists and trainees in restorative dentistry. One limitation of a panel approach to peer review, whether of the referral letters or of the replies, is that potential participants may be reluctant to submit their letters, and issues of confidentiality may also give cause for concern. For these reasons a panel approach was judged to have limitations for a peer review study. To overcome this, every participant in the study was involved in the assessment of all other participants' letters. Replies to referral letters cannot be assessed in isolation, so features of the referral letters themselves were also considered in the peer review.

Methods

An outline plan of the proposed study was circulated to all Consultants and Specialist Registrars in Restorative Dentistry practising in Scotland (n = 25). The protocol was modified in response to feedback. In the definitive study each participant submitted five pairs of referral and reply letters. No specifications were made as to which letters should be submitted or the range of reasons for referral. To maintain anonymity, participants were asked to remove any identifying features from the letters, including details of the patient, the referring practitioner, the centre in which the patient was seen and the clinician involved.

It was agreed that two of the participants would take responsibility for the administration of the study. Once received, the pairs of letters were labelled with a randomly generated two-digit code specific to each participant so that they were anonymised in the peer reviewing process. Copies of the pairs of referral and reply letters, with the exception of those each participant had submitted themselves, were then sent to every participant in the study.

To make the individual assessments as objective as possible, a proforma was designed which outlined a number of features of the reply letters judged to be important by two of the authors. This proforma was sent to each participant for comments and feedback prior to the review. Any issues in relation to the criteria were then resolved by discussion and agreement was reached on the final proforma (Table 1). Where tooth notation was used, the system was noted by the reviewers. A number of other criteria were also included on the form; these did not eventually form the main part of the peer reviewing process but provided useful additional information (Table 2). They related to whether radiographic findings were required and whether the treatment might be suitable for a general dental practice environment. Each participant was also asked to give an overall rank to the quality of each letter on a scale of 1 (poor) to 10 (excellent).

Table 1 The criteria used in the peer reviewing of the reply letters
Table 2 The additional criteria used in the reviewing of the reply letters

Each participant returned the five pairs of letters from every other participant together with their completed proforma reports. The responses to each criterion from the reviewers were summed and displayed for each participant. The responses to all the criteria in Table 1 were totalled, and the positive, negative, not applicable and no-response judgements were expressed as percentages of the total. The responses to individual criteria were also analysed. The ranking scores on the overall quality of the letters were expressed as median scores and inter-quartile ranges. Statistical analysis of the rankings was undertaken using the Kruskal-Wallis test and appropriate post hoc tests (Mann-Whitney, making allowance for multiple testing). Each participant was informed of the results of the peer reviewing process on their own letters.
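For readers who wish to reproduce this type of analysis, the following is a minimal sketch of the ranking analysis described above, written in Python with SciPy. The data values and variable names are hypothetical, and the Bonferroni-style adjustment for multiple pairwise comparisons is assumed to be the allowance for multiple testing referred to in the text.

```python
# Minimal sketch of the ranking analysis described above (hypothetical data).
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# rankings[clinician] = overall quality scores (1-10) given by the reviewers
# to that clinician's letters. Values here are illustrative only; in the study
# there were up to 30 scores per clinician (six reviewers x five letters).
rankings = {
    1: [8, 9, 7, 8, 9, 8],
    2: [7, 8, 8, 9, 7, 8],
    3: [9, 8, 8, 7, 9, 9],
    4: [6, 7, 5, 7, 6, 7],
    5: [8, 8, 9, 8, 7, 9],
    6: [9, 7, 8, 8, 9, 8],
    7: [8, 9, 8, 7, 8, 8],
}

# Overall test for variation between clinicians.
h_stat, p_overall = kruskal(*rankings.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, P = {p_overall:.4f}")

# Pairwise post hoc Mann-Whitney tests with a Bonferroni-style correction:
# seven clinicians give 21 pairwise comparisons, so the critical level is
# 0.05 / 21, approximately 0.0024 (the value quoted in the Results).
pairs = list(combinations(rankings, 2))
alpha_critical = 0.05 / len(pairs)
for a, b in pairs:
    _, p = mannwhitneyu(rankings[a], rankings[b], alternative="two-sided")
    if p < alpha_critical:
        print(f"Clinicians {a} vs {b}: P = {p:.4f} (significant)")
```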

Results

Seven clinicians (five consultants, two specialist registrars) expressed an interest in taking part in the study. Each provided five pairs of referral letters and corresponding replies (giving 35 pairs of letters). Because participants did not receive their own letters for peer review, each received 30 pairs of letters for assessment. A total of up to 210 responses was therefore available for each question (30 letters reviewed by each of the seven participants). In a small number of replies the reviewer may not have supplied an opinion on whether the criterion was met; these are shown on the bar charts as 'No response'.

A comparison was made between the clinicians of the percentage of criteria from Table 1 that were attained in the reply letters (Fig. 1). The reply letters of individual clinicians attracted between 61% and 89% positive responses across all criteria. In some instances the reviewers judged a criterion to be not applicable to the letter being reviewed, and in a very small number of cases no response was recorded by the peer reviewers. The results for individual criteria were analysed and generally the responses reflected the results for all criteria combined. However, there were particular features of interest in relation to some questions. The bar charts in Figures 2,3,4,5 and 6 show the reviewers' judgements for some of the individual criteria. For any one criterion there are a possible 30 responses (six reviewers looking at five letters each). Figure 2 shows the responses to whether comments were made about the medical history. More than half of the reviewers' responses indicated that at least four clinicians did not comment on the medical history in a significant proportion of their letters, with one clinician making no comment in any letter. The reviewers indicated (23–30 positive responses out of a total of 30) that the clinicians made clear treatment plans (Fig. 3). Similarly, as shown in Figure 4, there were between 25 and 30 positive responses (out of a total of 30) from the reviewers as to whether the clinicians answered the primary request made by the referring practitioner in their referral letter.

Figure 1: The percentage of responses for each participating clinician in which the peer reviewers judged the criteria to be attained (Yes) or not attained (No) in the reply letters.

The bar chart is based on the combined responses for all criteria shown in Table 1. Some criteria were either judged to be not applicable by some assessors or not commented upon (No response).

Figure 2: The number of responses (out of a total of 30) for each participating clinician in which the peer reviewers judged the criterion to be attained (Yes) or not attained (No) in the reply letter, in relation to a comment on the medical history.

This criterion could also be judged to be not applicable by some assessors or not commented upon (No response)

Figure 3: The number of responses (out of a total of 30) for each participating clinician in which the peer reviewers judged the criterion to be attained (Yes) or not attained (No) in the reply letter, in relation to the provision of a clear treatment plan.

This criterion could also be judged to be not applicable by some assessors or not commented upon (No response)

Figure 4: The number of responses (out of a total of 30) for each participating clinician in which the peer reviewers judged the criterion to be attained (Yes) or not attained (No) in the reply letter, in relation to whether the primary request of the referring practitioner was answered.

This criterion could also be judged to be not applicable by some assessors or not commented upon (No response)

Figure 5: The number of responses (out of a total of 30) for each participating clinician in which the peer reviewers judged the criterion to be attained (Yes) or not attained (No) in the reply letter, in relation to whether the same tooth notation as the referring practitioner was used.

This criterion could also be judged to be not applicable by some assessors or not commented upon (No response)

Figure 6: The tooth notation used in the referral letters (a) and the reply letters (b).

Where tooth notation was used in both the referral and reply letters, only between one and nine responses (out of a total of 30) indicated that it was the same (Fig. 5). In many cases the responses indicated that a different tooth notation was used in the reply letter compared with the referral letter. The referral letters used all three forms of tooth notation, with the Palmer system being the most frequent (Fig. 6a). In the reply letters all three forms of notation were used, with the FDI system being the most frequent (Fig. 6b). In some letters teeth were not notated, giving rise to the 'No response' bars.

In relation to the additional criteria (Table 2), the reviewers judged that in the majority of letters submitted, radiographic findings were required. When this was the case, the clinicians usually made comments on these. For one clinician the reviewers indicated that radiographic findings were more often not required. The responses of the reviewers on the appropriateness of the cases for treatment in general dental practice did not reveal major differences between the clinicians, with a majority of responses indicating suitability.

The results of the peer reviewing process on the overall quality of the individual letters are shown in Figure 7. Ninety-nine per cent of the returns were ranked. All median scores ranged between 7 and 9, although there was some variation between individual reviewers. A Kruskal-Wallis test showed significant variation between clinicians (P < 0.0005). Post hoc analyses using the Mann-Whitney test showed significant differences between clinician 4 and all other clinicians in the peer rankings (all P values < 0.0024, the critical significance level after making allowance for multiple testing between each pair of clinicians5). There were no significant differences between any of the other clinicians.
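The critical significance level of 0.0024 quoted above is consistent with a Bonferroni-type adjustment across all pairwise comparisons between the seven clinicians; assuming that this is the correction described in reference 5, the arithmetic is:

$$\alpha_{\text{critical}} = \frac{0.05}{\binom{7}{2}} = \frac{0.05}{21} \approx 0.0024$$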

Figure 7: A box-and-whisker plot showing the rankings on the five letters for each participant in the peer reviewing process.
figure 7

The boxes show the inter-quartile range with the heavy black lines in the centre representing the median rank

Discussion

The peer reviewing process of this group of reply letters from specialists to practitioners demonstrated that individual clinicians generally conformed to the criteria designed for the study. There were no obvious differences between the two specialist registrars and the other participants. It may be questioned why factors such as courtesy featured in the list of criteria considered. However, this is an area where particular sensitivity may be required, and it has certainly been of concern in communication between medical specialists and general practitioners.4

It was clear that some of the letters were very detailed and, while answering the referral request, may have contained additional information not directly relevant to the nature of the referral. It would be of interest to see whether referring practitioners find this helpful, although a study carried out in relation to cancer care suggested that reply letters commonly included more information than recipients wanted.6 Certainly there is an opportunity for education in this line of communication, and this should be a two-way process.4 Furthermore, many specialists regard their letters as comprehensive accounts of their findings, particularly as they become an important part of the patient's records.4

When considering individual criteria, some will be more important than others to the referring practitioner. In the present study the reviewers felt that the treatment plans were generally clear, and in many cases the reply letter answered the primary request of the referring practitioner. In a panel peer review study of replies from specialists in medical specialties to general medical practitioners, there was agreement that 55–60% of reply letters answered the reason for referral very well and approximately 20% moderately well.4 The results from the present study do not compare unfavourably with this.

There were two areas in which the reviewers found possible deficiencies in relation to the agreed criteria. Some participants may need to give further thought to including findings from the medical history, or at least indicating that there were no relevant factors that would affect the provision of care. The second main issue is that of tooth notation. There were clear discrepancies between the notation used in the referral letters and that used in the reply letters. There could be many reasons for this. A practitioner may have used a particular notation system for many years, whereas staff in institutions may have an agreed policy on how to notate teeth. The use of different forms of tooth notation between referring practitioners and specialists allows the possibility of error in the execution of treatment plans. The Palmer system is still popular in the UK, whereas the FDI notation is more widely understood internationally. More recently, deficiencies in the Palmer system for electronic communication have been noted, and an alternative system converting the quadrants into alphabetical descriptors (eg UR, UL, LL, LR) has been adopted in dental publications.7 Practitioners will notate teeth in the way in which they have been trained, and since many letters are handwritten the Palmer system poses no problem. However, the responsibility should lie with the specialist to ensure that reply letters make the tooth notation clear. Since the specialist may also have a preferred way of notating teeth, the issue may be resolved by inserting a descriptor of all tooth notation systems in the reply letter. This would reduce the chance of errors arising.
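As an illustration of how the notation systems relate, the sketch below maps between FDI two-digit codes and the alphanumeric quadrant descriptors mentioned above (which share the 1-8 tooth numbering used by the Palmer system) for the permanent dentition. The helper functions are hypothetical and not part of the study; they simply show that an unambiguous descriptor can be generated automatically when letters are produced electronically.

```python
# Hypothetical sketch: mapping FDI two-digit codes to alphanumeric quadrant
# descriptors (UR, UL, LL, LR) for the permanent dentition. The tooth number
# (1-8 from the midline) is the same as in the Palmer system.

FDI_QUADRANTS = {1: "UR", 2: "UL", 3: "LL", 4: "LR"}  # FDI quadrant digit -> descriptor

def fdi_to_alphanumeric(fdi_code: str) -> str:
    """Convert an FDI code such as '16' to its alphanumeric form 'UR6'."""
    quadrant, tooth = int(fdi_code[0]), int(fdi_code[1])
    return f"{FDI_QUADRANTS[quadrant]}{tooth}"

def alphanumeric_to_fdi(code: str) -> str:
    """Convert an alphanumeric code such as 'LL7' back to the FDI code '37'."""
    reverse = {v: k for k, v in FDI_QUADRANTS.items()}
    return f"{reverse[code[:2]]}{code[2]}"

if __name__ == "__main__":
    print(fdi_to_alphanumeric("16"))   # UR6 (upper right first permanent molar)
    print(alphanumeric_to_fdi("LL7"))  # 37
```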

The overall peer ranking was a separate subjective judgement made by each participant on their peers. Individual criteria did not form part of this analysis, although the assessors may well have been influenced by them in making their judgement on the overall quality of the letters. The overall ranking process showed that the reviewers generally felt that their colleagues' letters were of a favourable standard, and no clinician was judged to be poor by their peers. The significant variation observed in the rankings was accounted for mainly by differences between clinician 4 and the other colleagues, although the overall median of seven for this clinician was still very satisfactory. It is of interest to consider why this difference arose. One explanation is that the criteria were not adhered to; however, although clinician 4 had slightly lower values with respect to all criteria, this seems unlikely to provide a full explanation. There was a larger variation in the rankings for clinician 4, and it may be that one or more of the letters posed problems for the reviewers. Alternatively, it may reflect differences in letter style between one colleague and their peers. It is our view that a ranking of this nature should not be used as a tool to judge performance, as peer review used in this way is subjective. However, an important aspect of peer review is that individuals should reflect on the findings and consider whether there are aspects of their practice they may wish to modify. This applies to everyone involved in the study, and it would be appropriate to close the audit loop by repeating the study after a period of time to see whether clinicians have modified their practice in response to the peer review.

The additional criteria in Table 2 did not eventually form part of the peer reviewing process. The need to report on radiographic findings is not relevant to every patient; for example, the non-inclusion of radiographic findings for an edentulous patient need not be regarded as an undesirable feature of a reply letter, because in many cases they are not appropriate. Judgements on the provision of treatment in general dental practice, and under National Health Service arrangements, may have reflected the subjective views of the reviewers, and often specialists will need to communicate further with the referring practitioners themselves to explore this. For these reasons it was judged that the criteria in Table 2 did not give fully objective peer review, and they were not used in the overall analysis shown in Figure 1.

In this study no specific details were given to the participants as to the type of letter to be submitted. This could have resulted in some bias on two fronts. The first is that, because the number of letters and participants was limited, the peer review did not necessarily cover a representative range of the common referrals in restorative dentistry, and the participants may not be representative of a larger body of specialists. However, even with this number of participants, significant time was required for each participant to read the sets of letters. Furthermore, analysis of the large amount of data generated by the peer reviewing process also required considerable time. There would be significant implications for time and resources if such a study were carried out on a much larger scale. The second issue is that individual participants often have sub-specialty interests, which may attract a specific type of referral pattern. This could be addressed by a larger peer review study involving more centres, or by individual studies addressing each sub-specialty. Again, studies of this kind have significant resource implications.

This study has focussed on how specialists in restorative dentistry judged each other in relation to the content and structure of reply letters to general dental practitioners. A possible next stage would be to involve the practitioners who receive such letters in the process of peer review. Specialists may have particular ideas amongst themselves as to how the management of patients should be undertaken, and there may well be similarities in their approach because of the training they have received. However, the perceptions of colleagues in practice may be quite different. Little is known about the uptake of treatment plans by general dental practitioners, but if a treatment plan is not suitable for the practice environment obvious difficulties can arise.8 In relation to this, one aspect of reply letters that should be considered is that the specialist may be unaware of the clinical environment in which the general dental practitioner works. A number of ways of addressing such issues have been suggested. More direct links between practice environments and the referral centre could be achieved by the use of video consultations.9 Alternatively, outreach in which specialists visit the practice to see patients would offer clear advantages in relation to communication.8 However, both approaches are likely to have significant resource and logistic implications. For the immediate future, one of the main lines of communication will remain referral and reply letters.

In conclusion, the process of peer review carried out in the present study has shown that colleagues generally made favourable judgements on their peers' letters. However, significant areas in which communication can be improved have been identified, particularly the use of tooth notation. There would be merit in revisiting this topic to determine whether practice changes in response to these findings.