Kempt H, Nagel S K. Responsibility, second opinions and peer disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts. J Med Ethics 2022; 48: 222-229.

A rule for the use of AI is proposed.

Dentistry can be a lonely business. One only has to wait a short time on an online dental forum for a dentist to post radiographs and/or photographs seeking a second opinion (and usually receiving multiple differing opinions!). Cooperation between colleagues and the seeking of second opinions have been at the core of health care practice for decades, and rightly so. What, then, are the ethical considerations of using artificial intelligence decision support systems (AI-DSS) to provide that second opinion?

AI, whilst not yet perfect, is rapidly achieving a diagnostic accuracy that may eventually surpass that of human medical experts. When that happens, rejecting the input of AI amounts to accepting worse diagnostic outcomes. The medical professional in charge of the patient's care should, however, always retain ultimate legal responsibility for diagnosis and treatment. In cases of doubt, a peer colleague of equal standing can be asked for a second opinion. A reasoned discussion, backed up by explanation, can then take place, drawing on shared skills in assessing evidence and forming conclusions. Disagreements can be resolved by noting the reasons for them. Where a difference remains unresolved, 'the epistemic justification of why the final diagnosis and the second opinion differ' and 'why one of them should be favoured' can be recorded.

What are the consequences if the second opinion is based on AI-DSS and does not confirm the initial diagnosis? An exchange of views is not possible, yet the 'opinion' cannot simply be ignored. Nor can the dataset on which the AI-DSS works be discussed, or tweaked by the lead physician until a 'sensible' result is obtained, since this risks introducing a self-confirming bias into the diagnosis. The physician-physician relationship is thus ethically different from the physician-machine relationship.

A rule of disagreement is proposed: if the AI-DSS contradicts the initial diagnosis, this is counted as a disagreement and the second opinion of another physician is required. When the AI-DSS confirms the initial diagnosis, patient confidence may be increased and resources may be saved, in that a second human opinion is not needed.
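The proposed rule of disagreement can be sketched as a simple decision procedure (a hypothetical illustration only; the function and its names are ours, not the paper's):

```python
def rule_of_disagreement(initial_diagnosis: str, ai_dss_output: str) -> str:
    """Sketch of the proposed rule of disagreement.

    If the AI-DSS second opinion contradicts the initial diagnosis,
    the disagreement must be taken to a human peer; if it confirms
    the diagnosis, no further opinion is required.
    """
    if ai_dss_output == initial_diagnosis:
        return "confirmed: no human second opinion required"
    return "disagreement: human second opinion required"
```

The point of the rule is that the machine's dissent is never overridden unilaterally, nor resolved by discussion with the machine, but always escalated to a human colleague.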

The dangers of over-reliance on, and misuse of, AI-DSS technology are discussed. The importance of a human being taking full responsibility for diagnosis and treatment is stressed. (The reliability of second opinions generated on social media remains outside the scope of this paper.)