Artificial Intelligence (AI) is now a reality that can no longer be denied: everyone has heard of it, some (mostly developers) know the subject in depth, and few patients currently benefit from this technology in clinical practice. Where do doctors, especially urologists, fit into this equation? And how can the technology be implemented, and the relationship among developers, users (doctors), and beneficiaries (patients) be consolidated?

In this issue of Prostate Cancer and Prostatic Diseases, the review by Baydoun et al. [1] offers us a comprehensive overview of Artificial Intelligence Applications in Prostate Cancer. The paper evaluates various applications of AI in prostate cancer: improving the accuracy and efficiency of histopathology assessment and diagnostic imaging interpretation, risk stratification (i.e., prognostication), and prediction of therapeutic benefit for personalized treatment recommendations. While many studies remain within the pre-clinical space or lack validation, we have witnessed the emergence of robust AI-based biomarkers validated on thousands of patients, and the prospective deployment of clinically integrated workflows for automated radiation therapy design. However, multi-institutional and multi-disciplinary collaborations are needed to implement interoperable and accountable AI technology prospectively and routinely in our clinical practice. Although a large number of papers applying AI technology to urology has been highlighted in this review and in many previous works [2,3,4], there is still a lack of quality data to support the systematic applicability of the proposed models. Most of this work is based on retrospective cohorts: extensive external validation is lacking and, above all, so are solid prospective studies and RCTs on the clinical value of these models. This makes AI a potentially powerful tool, but one not yet exploitable in clinical practice.

In order to facilitate the adoption of AI technology, and to protect the rights of all parties, regulators worldwide are now working on specific rules for AI. In April 2021, the European Commission proposed the AI Act [5], the first legal framework to address the risks of specific uses of AI. Specifically, the legal framework will classify AI systems into four risk levels: unacceptable, high, limited, and minimal. While unacceptable-risk systems will be considered a threat to people and will be banned, the other AI systems will have to comply with requirements proportionate to their risk level: limited-risk systems will have to fulfill minimal transparency requirements, while high-risk systems will have to satisfy stricter requirements and will have to be assessed before being put on the market. With such regulations, we hope the scientific community will be more willing to endorse and adopt this technology in the specific field of urology.

AI, like any machine or technology based on an input-output system, faces some limitations. First, the machine alone would have difficulty communicating clinical information to patients, and it cannot perform an ethical evaluation. In this regard, it is important to highlight that biomedical AI systems could be designed as Clinical Decision Support Systems (CDSS), aiming to support clinicians in the decision process rather than replace them [6]. Second, AI has a limited ability to recognize bias, which in turn raises the problem of liability. To face this challenge, clinician-developer cooperation is of key importance. Indeed, involving clinicians during system design and development could ensure that the correct steps to quantify bias are taken before models are deployed, e.g., by taking under-represented patient groups into account (a sketch of such a check is given below). It is therefore essential to build a juridical and scientific regulatory framework. The parties involved are not only urologists as users of AI, but also developers and patients, who are the direct beneficiaries. Consider an AI model that shows excellent diagnostic performance (supported by rigorous scientific validation), even superior to human judgement, but which nevertheless causes harm due to an error or an unpredictable patient variable: who should be held responsible for this damage, the developer of the technology or the doctor who used it? It is arguable that the patient plays no role in this chain of reasoning, but would patients be willing to renounce the potential benefits deriving from the use of AI in medicine?
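
As a minimal sketch of what such a pre-deployment bias check might look like - one possible technique among many, not one prescribed by the review - the toy audit below compares a model's discrimination (AUC) across patient subgroups and flags groups too small to trust. The cohort, the column names, and the 5% under-representation threshold are all hypothetical.

```python
# Hedged sketch of a pre-deployment bias audit: compare a model's
# discrimination (AUC) across patient subgroups and flag groups too
# small to trust. All names and the 5% threshold are illustrative.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_audit(df, group_col, outcome_col="outcome",
                   score_col="predicted_risk", min_fraction=0.05):
    """Per-subgroup AUC of model scores against observed outcomes."""
    rows = []
    for group, sub in df.groupby(group_col):
        if sub[outcome_col].nunique() < 2:
            continue  # AUC is undefined when only one class is present
        rows.append({
            group_col: group,
            "n_patients": len(sub),
            "auc": roc_auc_score(sub[outcome_col], sub[score_col]),
            "under_represented": len(sub) < min_fraction * len(df),
        })
    return pd.DataFrame(rows)

# Synthetic demonstration cohort (purely illustrative)
rng = np.random.default_rng(42)
cohort = pd.DataFrame({
    "ethnic_group": rng.choice(["A", "B", "C"], 1000, p=[0.7, 0.27, 0.03]),
    "outcome": rng.integers(0, 2, 1000),
})
# A model whose scores are informative for group A but noisy for others
cohort["predicted_risk"] = np.where(
    cohort["ethnic_group"] == "A",
    cohort["outcome"] * 0.6 + rng.random(1000) * 0.4,
    rng.random(1000),
)
print(subgroup_audit(cohort, "ethnic_group"))
```

In this toy cohort the model is informative only for the majority group - exactly the kind of disparity such an audit is meant to surface before deployment.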

There is a call for a de facto alliance that shares both the objectives - namely the improvement of care - and the potential risks involved: an AI trifecta of developers, doctors, and patients. The role of the developer would be to minimize the risk of AI error, by demonstrating the generalizability of AI predictions on large prospective cohorts and by developing increasingly reliable tools. In this regard, over the last decade researchers have developed several explainable AI (XAI) methods to improve the understanding of AI models, thereby enhancing clinicians' trust by transforming the so-called AI black box into a transparent box [7]. XAI methods can explain the role of the variables involved in an AI model both globally, showing how an AI signature changes as the value of a specific variable increases or decreases, and locally, explaining why a specific output was given for an individual case (a toy illustration is sketched below). If, on the one hand, this helps us understand the why of a certain output, on the other hand it is also possible to suggest to clinicians when to trust it, by reporting the level of confidence with which the AI model produces that specific output. With such measures, it will be much easier for doctors to counsel patients, and for patients to understand the technology, to anticipate the benefit of applying AI to their treatment path, and to accept the potential risks in the event of a machine error.
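
As a minimal sketch of how global and local explanations can be produced in practice, the example below uses the open-source SHAP library on a toy gradient-boosting model. The features, data, and model are illustrative assumptions, not a validated clinical tool.

```python
# Hedged sketch: global and local explanations with the open-source
# SHAP library, on a toy gradient-boosting model. The features, data,
# and model are illustrative assumptions, not a validated clinical tool.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "psa": rng.gamma(2.0, 4.0, 500),              # ng/mL
    "age": rng.normal(66, 7, 500),                # years
    "prostate_volume": rng.normal(45, 10, 500),   # mL
})
# Toy outcome loosely driven by PSA density
y = ((X["psa"] / X["prostate_volume"] + rng.normal(0, 0.05, 500)) > 0.15).astype(int)
model = GradientBoostingClassifier().fit(X, y)

explainer = shap.Explainer(model, X)  # dispatches to a tree explainer
shap_values = explainer(X)

# Global explanation: which variables drive predictions across the
# cohort, and in which direction their values push the output
shap.plots.beeswarm(shap_values)

# Local explanation: why the model produced this output for one patient
shap.plots.waterfall(shap_values[0])
```

The beeswarm plot gives the global view (how each variable pushes predictions across the whole cohort), while the waterfall plot gives the local view (how each variable contributed to a single patient's output).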

AI is making giant strides in the field of urology. The urology community has been a forerunner of numerous new technologies - think of robotics, endoscopy, and laser applications - highlighting the tendency of this surgical branch to experiment and innovate. As urologists, we should take the extra step to embrace AI technology: it is an inevitable trend that will become reality in the very near future.

In conclusion, there is an absolute need for AI regulations in the field of medicine, in both clinical and scientific practice. The role of the AI trifecta - defined as the alliance among developers, doctors, and patients - would be to define the spaces, times, and ways of AI in medicine. In this context, urology has the chance to lead the way, perpetuating its propensity for innovation, technology, and research.