Main

Clinical Governance was introduced to the National Health Service (NHS) in 1998 as:

A framework through which NHS organisations are accountable for continuously improving the quality of their services and safeguarding high standards of care by creating an environment in which excellence in clinical care will flourish.1,2

Clinical governance is about developing the fundamental components required to facilitate the delivery of quality care, and about creating a no-blame, questioning, learning culture, excellent leadership and an ethos in which staff are valued and supported as they form partnerships with patients.3 It was introduced because quality initiatives such as medical and clinical audit were being criticised as professionally dominated and somewhat insular activities whose benefits were not readily apparent to the health service or patients.4

Clinical governance offers a means to integrate previously rather disparate and fragmented approaches to quality improvement.3 Its overall aims are to establish a quality assurance programme and to develop and define professional and executive responsibility. These new responsibilities lie ultimately with the Chief Executive and this has resulted in Trusts putting in place systems to address the central reporting and control requirements for governance. Individual services have been required to address the issues raised as a matter of priority. Clinical governance is conceptual in nature, but encompasses many themes. These are often in place or under development in a compartmentalised fashion within Trusts and Clinical Directorates (for example risk management, continuing professional development, clinical supervision). A recent paper in the British Medical Journal described how it had been used to improve postnatal depression services, to provide legible, accurate and timely discharge information in a urology service, to reduce delays in an adolescent mental health service, and to identify and remedy system flaws in an ambulance service.3 In many primary care trusts, this work has grown from medical audit and most now have clinical governance facilitators.

In dentistry, clinical governance has been described as a liberating experience: a chance to step off the treadmill for a while and to adjust lives and working conditions to make them more interesting, more enjoyable, less stressful and more effective for the profession and the people with and for whom we work.5 It should be seen as more than simple checklists, and it has been suggested that, to achieve its long-term objective, it must be process-driven.6

Health organisations need to have a realistic appreciation of how present performance compares with that of similar services and best practice standards.3 In the North West, a Clinical Directors' Group exists to support and promote dental service development. It proposed that, in order to bring the various strands together and to provide a structure to work within, a universally applicable standard for clinical governance should be developed. Discussions within the Group revealed that services varied widely, and were frequently subject to significant structural and operational change. A model was needed which provided a standardised framework, yet was robust and effective without being rigid and inflexible to local variations between services and change over time. In order to allow for this, each service needed to determine for itself the way in which it met the requirements of the model.

The four WHO components for clinical quality assurance,7 which were considered to be the foundations of the model, are:

  • Professional Performance (technical quality)

  • Risk Management

  • Patient (Customer) Satisfaction

  • Resource Use

Members of the Group were able to draw on experiences with various other models or bodies for quality assurance and clinical risk management and incorporate these into the development process. These included:

  • ISO 9000:2000

  • Investors in People (IiP)

  • Clinical Negligence Scheme for Trusts (CNST)

  • Commission for Health Improvement (CHI)

An initial draft of a unified clinical governance model was developed by combining, adapting and modifying aspects of these models, standards and organisations. This was then evaluated by a working party consisting of the authors and four members from the Group.

The model also had to be capable of measuring progress in service development. The approach adopted was based on the RAID3,8 (Review, Agree, Implement and Demonstrate) model used by the NHS Clinical Governance Support Unit, with its emphasis on an iterative process of improvement.

The model

The final model (Table 1) consists of three main areas that define structure (components 1 to 3), control the process (components 4 to 11) and assure the outcome (components 12 to 14), each component having its own series of indicators. Most of the components are process-focussed. This emphasis is deliberate since it is a defining principle of a quality assurance approach that if a system can ensure that its processes are correct and well controlled, then the desired outcomes will result.

Table 1 The model

Flexibility is achieved because each service is free to determine for itself the most appropriate way in which to meet the requirements of any individual indicator. In other words, the model defines what should be in place while the service defines how it should be achieved. This is the inherent strength of the model. Once the requirements of all the indicators for a particular component are met, the service is deemed to have achieved the standard required for that component.

The RAID system requires a baseline assessment, the subsequent development of action plans and a review of progress as part of the process of organisational development. Progress is measured by creating a service achievement score for each of the 14 components. Each score is a percentage based on how far the requirements of the individual indicators for that component are met.

To carry out an assessment, each indicator is scored in turn. Evidence, in the form of documents, policies and procedures used within the service, is considered to determine whether the requirements of the indicator have been achieved. A value is allocated as follows:

  • 0 Indicator not addressed

  • 0.5 Indicator partially addressed. There is implicit evidence in a series of associated documents/actions

  • 1 Indicator unequivocally addressed. There is evidence of explicit documentation or activity related to that section

The value for each indicator is then weighted to emphasise those that are more critical. The weights used were derived by the authors and then revised by the working party. The total score for each component is calculated as a percentage of the maximum potential weighted value.
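As a rough illustration of this calculation, the sketch below (in Python) computes the percentage achievement score for a single hypothetical component. The indicator names, values and weights are invented for the purpose of the example; they are not the pre-set weights used in the authors' spreadsheet.

```python
# Illustrative only: hypothetical indicators, values and weights for one component.
# Each indicator is scored 0 (not addressed), 0.5 (partially addressed,
# implicit evidence) or 1 (unequivocally addressed, explicit evidence).
indicator_scores = {
    "written policy in place": 1.0,
    "staff briefed on policy": 0.5,
    "annual review documented": 0.0,
}

# Weights emphasise the more critical indicators (values invented here).
indicator_weights = {
    "written policy in place": 3,
    "staff briefed on policy": 2,
    "annual review documented": 1,
}

def component_achievement(scores, weights):
    """Weighted sum of indicator values, expressed as a percentage of the
    maximum potential weighted value for the component."""
    achieved = sum(scores[name] * weights[name] for name in scores)
    maximum = sum(weights.values())
    return 100.0 * achieved / maximum

print(f"{component_achievement(indicator_scores, indicator_weights):.0f}%")
# (3*1 + 2*0.5 + 1*0) / (3 + 2 + 1) = 4.0 / 6, i.e. approximately 67%
```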

The scoring system is available as a spreadsheet containing the pre-set weights and formulae necessary to calculate the percentage achievement scores for each of the individual components. Figure 1 shows the calculation for one of the components using this spreadsheet.

Figure 1 Calculation of the percentage achievement score for Component 6 of the model

Verification

To test the model, one of the authors (RSM) assessed his service in conjunction with his clinical director. The second author (LH) then acted as an external assessor, examining the evidence presented in support of the scores allocated. Where necessary, the scores were adjusted following this process. This stage was designed to 'validate' the scores that had been allocated on a self-assessment basis, and it produced a final set of externally verified achievement scores for each component, which represents the baseline assessment.

The pilot service subsequently held a clinical governance development day to which representatives of all staff groups from the service were invited. The model and baseline assessment were presented, and areas with low scores were discussed. Following this, a clinical governance development action plan was drawn up. After 12 months, it is envisaged that the service will repeat the assessment process to demonstrate:

  • Progress in the low score areas.

  • That standards have been maintained in high score areas.

Based upon this, the service will review and re-define its action plan.

Discussion

A service using this approach can easily carry out a number of functions including:

  • Establishing a baseline

  • Identifying 'weak areas' to support action planning

  • Enabling re-scoring to demonstrate progress and identify slippage in previously highly scored areas.

The nature of the scores must be considered carefully. Because of the initial values used (0, 0.5 and 1), the final scores for each component are ordinal in nature. They are comparative only on a 'worse', 'same' or 'better' level.

While it is possible to calculate a single overall percentage achievement for a service based upon the scores for all the individual components taken together, this is of limited value and could be misinterpreted. For example, a high overall score might mask little or no progress in one of the components. Thus the individual component scores, not the overall achievement score, should be used for service review. Web or radar plots can be used to emphasise individual components graphically (Fig. 2). This highlights the problem areas in a particular primary dental care service and enables them to be arranged in priority order and tackled as resources allow. It also provides a straightforward way for services to measure progress and to identify future improvements required within the governance system, and it ensures that achievements are systematically reviewed and maintained.

Figure 2 Radar plot of assessment
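A plot of this kind can be produced with standard charting tools. The sketch below, assuming Python with matplotlib and using invented component scores, shows one way a service might draw a radar (web) plot of its 14 percentage achievement scores.

```python
import math
import matplotlib.pyplot as plt

# Hypothetical percentage achievement scores for the 14 components.
scores = [80, 65, 90, 40, 55, 70, 30, 85, 60, 75, 50, 45, 95, 20]
labels = [f"C{i}" for i in range(1, 15)]

# One spoke per component; repeat the first point to close the polygon.
angles = [2 * math.pi * i / len(scores) for i in range(len(scores))]
angles.append(angles[0])
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels)
ax.set_ylim(0, 100)
ax.set_title("Component achievement scores (%)")
plt.show()
```

A low spoke is immediately visible on such a plot, which makes it straightforward to place weak components in priority order at a service review.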

In using the model, and particularly the assessment tool, it is important to recognise that clinical governance is a concept and is not about 'ticking boxes'. However, the model provides a method to support the development of clinical governance by setting out the components a service should have in place to assure the quality of the clinical care that it provides. Simply put, the model meets the requirements of clinical governance in a demonstrable way. In addition, using the RAID system ensures that this is an active process involving staff and users of health services in improving clinical quality. The model is not prescriptive and can be used or adapted as services change.

Furthermore, using an agreed model within and across services allows for the establishment of a structured peer review process involving multiple services. At its least, this is a powerful argument for self-regulation within an organisation, and there is a degree of protection by using such a recognised system. Much more than this, however, is the potential that such a process has for identifying the best examples of high quality service provision and promoting them. Already this has been recognised within services involved in the development work.

Clearly, using this model and approach has resource implications for an organisation. These have not been quantified because the resources required will vary greatly depending upon the size and complexity of the service. Indeed, part of the model requires that an organisation identifies for itself the resources required to maintain such a structured approach. It should not be forgotten, however, that in the long term the better prepared an organisation is for assessments by bodies such as CHI, the less work will be needed to achieve a high standard or to meet the subsequent action plan. In addition, and arguably of greater importance, the overt and hidden costs of not getting clinical care right first time can be substantial.

Another criticism that could be levelled concerns the plethora of assessments already being carried out within the NHS and the need for yet another set of standards and verification or audit processes. Although this model does not cover every aspect of all the centrally driven NHS audit, inspection and assessment processes in place at present, it provides a sound basis for a clinical quality assurance programme. Thus, for the busy clinician faced with a bewildering array of initiatives, it offers a single unified approach that brings together this multiplicity of assessment processes. The use of the model ensures that local solutions and approaches to clinical quality assurance are sought by a partnership involving all stakeholders in a service, and are not driven by remote, imposed and, on occasions, irrelevant external sources. Crucially, therefore, it shifts the balance from centrally driven, externally imposed assessment and inspection back to an emphasis on professional responsibility for clinical quality assurance, self-assessment and peer review. At the same time, the methodology is robust, rigorous and open to external scrutiny. By employing a bespoke model (although one that can trace its source to respected and validated external standards), the process followed is sensitive to change and is 'close to the coal face'.

Because the model and the methodology described are firmly based upon a quality assurance approach to clinical governance, a service is guided towards three fundamentals:

  • Customer or patient focus

  • Measured assessment and progress review

  • Continuous improvement

A service using these principles should be confident that, through a Quality Management System approach, it is delivering safe practice and a patient-centred service and is achieving continuous improvement.

Of particular value for services is the provision of opportunities for the sharing of ideas and the dissemination of good practice. Inevitably, individual practices and services will be at different stages along the way to achieving comprehensive clinical quality assurance. This process provides a method by which serious deficiencies, should they exist, can be identified and quickly rectified. Equally, and more positively, quality improvements can be achieved that make the best use of valuable and inevitably limited resources.

To support this, it is proposed that a mechanism available to all should be developed, allowing access to identified examples of good practice related to particular components of the model. Primary care trusts clearly have an important role in ensuring that all practitioners who wish to pursue this approach are enabled to do so.

Finally, it is important that a process of audit and review is maintained for the model itself and its associated methodology, in order to allow it to adapt to changes in health services. If it becomes calcified and does not allow an iterative approach, it will cease to be of use.

Although the concept of clinical governance was introduced into the general dental service in 2001/02, there has been no guidance about how individual practices should prioritise the area(s) on which they need to concentrate. This model provides practitioners with a scoring system that quickly identifies components that require attention. With the possible exception of Component 14, the Annual Report, the model is applicable to the general dental service and the authors are actively seeking general dental practitioners who wish to take it up.

Conclusions

Throughout, the authors have been conscious of the requirement to 'develop a methodology that all dentists and other clinicians can use (share good practice)', and it is their contention that the model and assessment process can be applied in any area of healthcare. Although they were developed for a salaried primary care dental service, the model, with minor adaptations, can be applied to any primary care service.

If structured properly and supported, it can also be a more frequent driver for continuous improvement within a service than that offered by CHI, which expects to assess every trust on a four-year rolling programme. Clearly a great deal can happen to an organisation in four years, both positive and negative. By combining the process of internal continuous improvement, yearly assessment and validation described in this paper with the four-yearly 'external' assessment from a single NHS assessment body, true clinical quality assurance can be achieved within the NHS.