Main

As consumers we readily differentiate between good and indifferent service and are well versed in articulating our views of it. There can therefore be few, if any, who have not experienced and been influenced by the cultural changes brought about by today's consumer-driven competitive society. In the business literature, quality of service has been defined as conformance to managerial specifications.1 Although this may be readily assessed by such parameters as patient waiting times or the proportion of students passing examinations, the definition is unhelpful, because customers' expectations for a particular service are known to shape their assessment of the quality of that service.2 When there is a discrepancy between customers' expectations and management's understanding of them, perceived service quality suffers. This presents a problem for the service provider: although the product, such as a new crown or an undergraduate/postgraduate course, conforms to specification, the manner in which it is delivered may adversely colour the patient's/customer's final perception of it. Such terminology and culture would at one time have been unfamiliar to both healthcare and academic communities.

Today, however, we are all too familiar with quality initiatives such as the National Survey of Patient and User Experience3 and the completion of course questionnaires on attending an approved postgraduate course.4 One difficulty of such an approach, however, is knowing how to react to the results, which often fluctuate from year to year even though the service has remained relatively unchanged. Such apparent paradoxes may be addressed by examining the experience of service industries, where customers' expectations for a service are known to shape their assessment of its quality.2 Berry et al.2 examined the expectations of 731 customers across four different types of commercial service and found that these expectations covered five areas (tangibles, reliability, responsiveness, assurance and empathy), as summarised in Table 1. Such a model could potentially be customised to improve the experiences of participants on a postgraduate/undergraduate course or even of patients attending a dental surgery. This work reports upon the development, application and use of such a model in the context of delivering an undergraduate course.

Table 1

Materials and methods

A questionnaire (Table 2) based upon an industrial customer service quality questionnaire,5 derived from the work of Berry et al.,2 was devised to assess the physical delivery of a course. This consisted of a series of statements with which the students could indicate their level of agreement using a five-point score (1 = Disagree, 3 = Neutral, 5 = Agree). It was administered to all second-year dental students attending the last scheduled class of their phantom-head-based course on the management of dental caries, in the academic years 1997–1998 (46 students) and 1999–2000 (32 students). For each question, the number of responses at each level was multiplied by that level's score; these products were summed and divided by the theoretical maximum score to give a percentage score for the question. The mean of these scores across the whole questionnaire, for a given year of students, enabled the course delivery, as perceived by the recipients, to be classified according to the industry-based quality standard:5 10–20%, Cruel and unusual punishment; 21–40%, You call this service?; 41–60%, Average but who wants average service?; 61–80%, Close only counts in horseshoes; and 81–100%, Service hall of fame candidate.
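The scoring and classification scheme described above can be sketched as follows. This is a minimal illustration of the arithmetic only; the response counts used in the examples are hypothetical, not the study's data.

```python
def question_score(counts):
    """Percentage score for one question.

    counts maps each agreement level (1-5) to the number of students
    who chose it (hypothetical figures, not the study's data).
    """
    achieved = sum(level * n for level, n in counts.items())
    maximum = 5 * sum(counts.values())  # every respondent scoring 5
    return 100.0 * achieved / maximum

def course_score(all_counts):
    """Mean of the per-question percentage scores for one year."""
    scores = [question_score(c) for c in all_counts]
    return sum(scores) / len(scores)

def classify(score):
    """Bands of the industry-based quality standard (ref. 5)."""
    if score > 80:
        return "Service hall of fame candidate"
    if score > 60:
        return "Close only counts in horseshoes"
    if score > 40:
        return "Average but who wants average service?"
    if score > 20:
        return "You call this service?"
    return "Cruel and unusual punishment"
```

For example, a question to which five students answered Neutral (3) and five Agree (5) scores (15 + 25) / 50 = 80%.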

Table 2

A separate questionnaire (Table 3) sought to ascertain the level of importance the students attached to each aspect of course delivery. For the 1997–1998 students this was administered following the course, and for the 1999–2000 students prior to its commencement. A five-point score (1 = Unimportant, 3 = Neutral, 5 = Very important) was allocated to each aspect, and an overall percentage score was derived for each year as described above.

Table 3

A statistical comparison was made of how the perceived level of teaching/service provision matched the students' expectations. This also gave an indication of those areas of course delivery considered important by the students, and a measure of year and gender influences upon this. Areas where the delivered score fell short of the expected score by ≥7% were defined empirically as areas for improvement. Improvement measures, in areas identified retrospectively by the 1997–1998 students and considered important prospectively by the 1999–2000 students, were introduced for the year 1999–2000.
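The gap analysis can be expressed in the same way. The sketch below flags dimensions where delivery falls short of expectation by at least the empirical 7% threshold; the dimension names and scores in the example are hypothetical, not the study's data.

```python
SHORTFALL_THRESHOLD = 7.0  # percentage points, empirical cut-off

def shortfalls(expected, delivered):
    """Return the dimensions needing improvement.

    expected and delivered map a service-dimension name to its
    percentage score; a dimension is flagged when delivery falls
    short of expectation by at least the threshold.
    """
    gaps = {}
    for dim, exp_score in expected.items():
        gap = exp_score - delivered[dim]
        if gap >= SHORTFALL_THRESHOLD:
            gaps[dim] = gap
    return gaps
```

A dimension expected at 95% but delivered at 80% would thus be flagged with a 15-point shortfall, while one expected at 70% and delivered at 68% would not.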

Results

For both years the questionnaire return rate was 100%.

Table 4 contains a classification of the various quality components that were examined. It also summarises and contrasts, for both years, the expected levels of service and the levels delivered as perceived by the students. In addition, for each year, overall mean scores are given. It should be noted that the service dimensions in this table are ranked in descending order of importance as perceived by the 1997–1998 year of students; for comparative purposes the rank order assigned by the 1999–2000 year is also included. Where more than one delivery question contributes to a particular service dimension, the score given is derived from the responses to all the questions relating to that dimension.

Table 4

For both years (Table 4) it is interesting to note that the students expected a high level of service for all the course aspects assessed, with the exception of the tutors' appearance. Analyses of variance reveal no statistically significant (P > 0.05) differences in the overall expectations of students across the two years, although clear differences in the ranking of the importance of the different service dimensions exist. On overall course delivery, however, there is a statistically significant (P < 0.05) improvement for the 1999–2000 year.

With respect to the shortfalls in delivery, defined empirically in this work as a difference of ≥7% between expectation and delivery scores, two areas are common across both years: the ability of the tutors to convey confidence (shortfall 1997–1998 = 14.3%, 1999–2000 = 7.6%) and to help learning consistently (shortfall 1997–1998 = 13.2%, 1999–2000 = 7.6%). The reduction in the gulf between expected and delivered scores observed in these categories for 1999–2000 is not statistically significant (P > 0.05, one-sample chi-squared test of raw data), but for the students affected it must represent a step in the right direction. It is also worth pointing out that in 1997–1998 there were shortfalls in the areas of willingness to give assistance (7.4%), provision of caring individualised attention (18.7%) and the tutors' willingness to provide prompt responses to learning needs (12.5%). In 1999–2000 these had all been reduced, but only in the area of prompt responses to learning needs was the reduction statistically significant (P < 0.05, one-sample chi-squared test).

Discussion

Before addressing the findings of the study it is appropriate to comment upon certain aspects of its design. The population surveyed comprised those attending a course module under the direction of the author. The different number of students surveyed in each year was a reflection both of the number enrolled on the course and of attendance on the day the questionnaire was administered. It should be noted that the approach to monitoring delivery quality was introduced on an evolving basis. This accounts for the differing timing of administration of the questionnaire that sought to ascertain the level of importance the students attached to each aspect of course delivery (following the course for 1997–1998, prior to the course for 1999–2000). In the light of our experiences it would now routinely be administered prior to the course, so that delivery could be tailored to match more closely the expectations of the students.

In the present study no attempt was made to assess the tutors' ability to convey knowledge, as this can be more appropriately monitored by both in-course continuous assessment and performance in professional examinations. Delivery in the dimension of the tutors' ability to convey trust was also not measured, as a satisfactory method of achieving this could not be found.

It is apparent that the students surveyed in this study expected a high level of service quality from their course tutors (Table 4). Although quality in industry is often defined as conformance to managerial specifications, it is really conformance to the definition of quality as laid down by the consumer/student that counts.2 The high level of expectation seen in this work mirrors that seen in service industries, but, unlike the neat compartmentalisation of importance into each service dimension reported in that setting,2 no clear dimension is favoured by the students, the high expectations being maintained across all categories. This makes effective delivery difficult, for excellence is expected in all aspects of course delivery. These expectations are higher than those encountered in many service industries2 and therefore represent a considerable challenge to the course tutors.

When the results of this survey are compared with educational research on the factors that determine student perception of teaching quality,6,7,8 similar dimensions are highlighted. For example, class size,6 teacher response time7 and organisational environment8 translate well, being components of the industry dimensions of empathy, responsiveness and reliability. Furthermore, the questionnaire's structure, in measuring both expectations and delivery, has a parallel in the more recently developed student satisfaction survey9 devised by the Centre for Research into Quality at the University of Central England (Birmingham). That survey is based on answers that capture students' experiences in two dimensions: importance and satisfaction. It is striking how similar the commercial and educational approaches are; this may reflect how far consumerism has influenced today's society.

The gap between the academic years surveyed was deliberate: in the light of the results of the initial survey (1997–1998), time was required to adjust the course to address the identified shortfalls. It was thought that, by maintaining the same staff:student ratio (2:12) and improving access to a tutor during the class, each student would receive a more equitable share of the tutors' time. This was accomplished by introducing a simple queuing system, with the result that a tutor's time was more evenly distributed across the class. This yielded improvements in the areas of willingness to give assistance, provision of individualised attention and promptness of responses. A more tightly structured course manual also brought improvements in the tutors' consistency (Table 4). The overall effect was to improve significantly (P < 0.05) the perceived level of course delivery. It should be stressed that this was achieved with no alteration in course content.

The results of this study show that measuring students' expectations prior to the course enables its delivery to be matched to those expectations and so improves the perception of its quality. This must be of benefit, for it should reduce distraction and facilitate learning. Such an approach may also be of assistance in matching patient expectations to the delivery of a healthcare service. The industrial model reported here appears to have translated well to an educational context and should prove useful in continually improving course delivery.