An intelligent methodology for the use of multi-criteria decision analysis in impact assessment: the case of real-world offshore construction

Impact assessment of large-scale projects involves a plethora of technical, economic, social, and environmental factors that must be assessed along with the expectations of the stakeholders of each project. While impact assessment is required for a development project to receive regulatory approval to proceed, it is also an invaluable tool during the design phase of complex projects, providing for informed decision-making. Molding multiple perspectives of diverse stakeholders into a single collective choice is a key challenge in the process. Multi-Criteria Decision Analysis (MCDA) is the methodology used to rank a finite number of decision options based on a finite set of evaluation criteria. Different MCDA techniques, however, may lead to different decisions when applied to the same problem while different sets of criteria and weights may rank choices differently even when the same method is applied. This is a cause of concern, and even acrimony, amongst the stakeholders, often leading to protracted periods of negotiation and delaying project launching. The objective of this paper is to present an intelligent system to ameliorate the effects of the inherent subjectivity in MCDA techniques and to develop a consensus amongst the stakeholders in a data-driven setting. A case study from the field of offshore construction is used as a running example. This case study, informed by real-world experience in the field, demonstrates succinctly the issues involved and illustrates clearly the proposed intelligent methodology and its merits.

www.nature.com/scientificreports/ One way to address the issue of subjectivity in MCDA methods is to use more robust hybrid models that combine two or more techniques to address decision-making problems. The expectation from a hybrid approach is that it will combine the advantages of each MCDA method while overcoming the drawbacks of each method applied alone. Hybrid MCDA methods can also effectively support the structuring of decision making on complex policy issues with fuzzy data and simultaneous use of quantitative and qualitative variables21,22.
Coupling MCDA techniques is often done within the framework of designing intelligent Decision Support Systems (IDSS). Such systems have been shown to have considerable success in addressing a wide range of complex real-world problems, albeit at the expense of the transparency apparent to external stakeholders. In this context, the objective of this paper is threefold:
• To elucidate the view from the field on the use of MCDA techniques in offshore construction;
• To detail some of the most persistent practical issues in the use of existing MCDA methods; and
• To present an IDSS employing existing MCDA tools for the use of firms involved in the offshore construction of marine installations.
While the case study is informed by real-world experience in the field of offshore construction, the novel IDSS methodology, which is the key contribution of this paper, is applicable across a wide range of development projects facing similar issues.
This paper is organized as follows. In "MCDA-the view from the field" Section, a concise overview of MCDA techniques in offshore construction practice is presented. The overview is based primarily upon the experiences and views of the second author, who is the General Director of Archirodon Group NV-one of the top marine contractors internationally with 60 years of experience in offshore construction. In "Offshore wind farm installation-a case study" Section, a case study from the literature is employed to demonstrate practical issues with MCDA techniques, such as the infamous rank reversal, and how they may affect stakeholder perceptions. In "The rank reversal conundrum and identifying the top choices" Section, the rank-reversal paradox is examined and a rank-reversal-robust method is applied to identify the top choices. In "Fuzzy logic and criteria clustering" Section, fuzzy sets theory is brought to bear on the problem through the COMET method and the clustering of criteria. In "Intelligent MCDA methodology" Section, an intelligent DSS is presented that couples the traditional MCDA approaches with fuzzy sets theory to ameliorate the issues identified in the previous sections. Finally, in "Conclusions" Section, the conclusions of this paper and some directions for future research are presented in summary form.

MCDA-the view from the field
The appeal of MCDA in many real-world applications is due to its capacity to simplify complex situations characterized by multiple (and possibly conflicting) objectives and criteria, and to rationalize the decision process. The common schema of MCDA typically involves the construction of a performance matrix, with each row representing a specific decision option and each column assessing the performance of that option against one of the criteria in the set. MCDA involves two critical choices: (i) the selection of criteria that capture the most important parameters, constraints and expected impacts of a project; and (ii) the weighting of the criteria to reflect their relative importance23,24.
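The common schema can be sketched as a small data structure. A minimal illustration follows; the criterion names and figures are invented placeholders, not data from any real project:

```python
# Rows are decision options; columns are criteria. All names and figures
# below are illustrative placeholders, not data from a real project.
performance_matrix = {
    "option_a": {"cost_meur": 42.0, "co2_kt": 310.0, "jobs": 120},
    "option_b": {"cost_meur": 38.0, "co2_kt": 420.0, "jobs": 95},
    "option_c": {"cost_meur": 55.0, "co2_kt": 280.0, "jobs": 150},
}

# Weights encode the relative importance of the criteria; by convention
# they are non-negative and sum to one.
weights = {"cost_meur": 0.5, "co2_kt": 0.3, "jobs": 0.2}
assert abs(sum(weights.values()) - 1.0) < 1e-9
# Every option must be scored against the same criteria set.
assert all(set(row) == set(weights) for row in performance_matrix.values())
```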
In real-world situations, such as offshore construction, where the selection of the criteria is not always obvious, and the data is often fuzzy, significant human resources are devoted to the structuring of the problem. Identifying and selecting the individuals that will be involved in the analysis is a process critical for success, yet rife with technical, political, and human relations issues. The project contractor is typically responsible for appointing, after consultations, three teams to be involved in the application of MCDA:
• The negotiation team, with members chosen among the project's stakeholders and whose preferences and ratings will inform the structure and entries of the MCDA performance matrix;
• The technical team, responsible for supporting the negotiation team, with members proficient in the mathematics of MCDA and the relevant software implementations, as well as experts responsible for providing additional data to the negotiation team as needed; and
• The mediation team, with managerial and legal expertise to safeguard the fairness of the process and resolve arguments.
The negotiation team, with the tacit support of the technical team, proceeds sequentially to establish: (i) the list of potential decisions or solutions to be examined in the analysis; (ii) the criteria to be used by integrating all the points of view expressed; (iii) the relative importance of each criterion; (iv) the rating of each solution when judged against each criterion; and (v) the aggregate judgements using an agreed upon MCDA technique. The mediation team makes sure that each step of the process unfolds within a framework established a priori, with rules agreed by all, so that the process will result in decisions with the broadest possible acceptance.
In practice, the process outlined previously rarely concludes in one round. Typically, there are several iterations that may modify the definition of the problem, the criteria used, and the assessments made. Revisiting the criteria and rating their importance is a useful negotiation tool for debates among the contractor and the stakeholders. These iterations test the boundaries of the decision (and may even serve as a de-facto sensitivity analysis) until the proposed solution meets with general acceptance.
What is rarely appreciated in theoretical MCDA is the fact that the analysis (with or without iterations) takes considerable time. In offshore construction projects, for instance, it usually lasts several months. The distinct danger in such time spans is that some of the fundamental economic, social, or political dimensions may indeed change due to external factors, lengthening even further the decision horizon. In the experience of major international offshore contractors such as Archirodon, slow decision making, and the resultant design changes, is the top factor for cost overruns. The Archirodon experience is not unique; a comprehensive analysis of risk factors facing construction management firms cites the lack of robust risk management practices as a distinct threat to profitability, project performance, and customer satisfaction25.
In such complex situations, where there is a need to reach a timely decision and time is of the essence, the MCDA methodology should be as simple as possible, and the dimensions of the performance matrix kept to a minimum. That is, the choices compared should be as few as is realistic, while the criteria used should be few and easily understood by the stakeholders. Furthermore, experienced contractors make sure that there is real participation and deliberation in the application of MCDA to reduce unnecessary iterations. In this context, participation extends beyond information dissemination to include active engagement and exchange of ideas. Deliberation involves fair and inclusive dialogue between participants able to debate and contribute to the methodology.
MCDA presents a shared framework and a common language to develop data-driven solutions for complex offshore installations but is particularly sensitive to subjective biases and data asymmetry. Participation and deliberation are essential for a timely decision, yet they are accompanied by problems of their own. Practical difficulties arise when:
• The stakeholders do not have basic skills in mathematical concepts and data aggregation methodologies to appreciate the nuances in MCDA; or
• The stakeholders do have the skills to understand the subjectivity inherent in MCDA, leading to fears that the manipulation of criteria and weights may privilege certain choices over others.
In what follows, a running case study is employed to highlight some of the issues involved. To avoid using proprietary information from Archirodon, and to alleviate possible concerns about a conveniently designed example, the case study is based on publicly available data for a specific problem of designing an offshore solar farm installation26. All data generated or analyzed during this study are included in the body of this paper.

Offshore wind farm installation-a case study
The case study involves the problem of site selection for an offshore solar farm deployment in the Aegean Sea, Greece26. There were nine candidate locations (MA1 ÷ MA9) and seven assessment criteria (AC1 ÷ AC7), as outlined in Table 1. The nine locations were chosen from a larger pool of choices after the application of exclusion criteria and the removal of unsuitable areas. The assessment criteria were identified through a literature review of renewable energy sources and include water depth (AC1), distance from shore (AC2), main voltage at a maximum distance of 100 km from the site area (AC3), distance from ports (AC4), serving population (AC5), solar radiation (AC6), and installation site area (AC7). AC1, AC2 and AC4 have negative polarity (smaller is better) while AC3, AC5, AC6, and AC7 have positive polarity (larger is better).
Since the criteria are expressed in truly diverse scales and units, it is customary to proceed with normalization, to make all the indicators comparable on the same scale, and aggregation, to combine the normalized indicators into an overall score/index. For MCDA input data, there are varied techniques of normalization (ordinal, linear scale, ratio scale, sigmoid, etc.) and aggregation (additive, geometric, harmonic, minimum, median, etc.). The actual combination of normalization and aggregation methods used influences the outcome of MCDA.
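One such combination, min-max normalization followed by additive (weighted-sum) aggregation, can be sketched in a few lines. This is a generic illustration of the technique on toy numbers, not a reproduction of the MCDA Index Tool:

```python
def minmax_normalize(values, larger_is_better):
    """Min-max normalize one criterion column to [0, 1]; columns with
    negative polarity (smaller is better) are inverted so that 1.0 is
    always the best score."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # constant column carries no information
        return [0.5] * len(values)
    scores = [(v - lo) / (hi - lo) for v in values]
    return scores if larger_is_better else [1.0 - s for s in scores]

def additive_aggregate(matrix, polarities, weights=None):
    """Normalize each criterion column, then combine with a weighted sum
    (equal weights by default)."""
    n_crit = len(matrix[0])
    weights = weights or [1.0 / n_crit] * n_crit
    cols = [minmax_normalize([row[j] for row in matrix], polarities[j])
            for j in range(n_crit)]
    return [sum(w * col[i] for w, col in zip(weights, cols))
            for i in range(len(matrix))]

# Two options scored on a cost-type and a benefit-type criterion.
scores = additive_aggregate([[10, 5], [20, 3]], [False, True])
assert scores == [1.0, 0.0]   # the first option dominates on both criteria
```

Swapping the normalization (rank, standardized, logistic, ...) or the aggregation (geometric, harmonic, minimum, ...) yields the other combinations, and, as the text notes, potentially different rankings.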
For the present case study, the web-based MCDA Index Tool (www.mcdaindex.net) is used for further analysis. The MCDA Index Tool provides for all combinations of 8 normalization methods (rank, percentile rank, standardized, minmax, target, logistic, 3-tier categorical, 5-tier categorical) and 5 aggregation methods (additive, geometric, harmonic, minimum, median). Since not all normalization methods are compatible with all aggregation methods, there are 31 feasible combinations of normalization/aggregation27.
Processing the input data of the case study with the MCDA Index Tool for the 31 distinct combinations of normalization and aggregation methods with equal weights leads to the results tabulated in Fig. 1. The color coding in the figure shows the strength of the ranking obtained by each alternative location. For instance, location MA9 is top-ranked in 87%, third-ranked in 3% and fourth-ranked in 10% of the 31 normalization/aggregation pairs examined. Figure 1 illustrates the alternative rankings that can be obtained for different pairs. Figure 2 presents the comparison of rankings obtained by each location over all normalization/aggregation combinations. From these comparisons, it is evident that locations MA4 ÷ MA7 never achieve a rank higher than fourth. This is the kind of observation that leads the negotiation team to consider dropping from the next iteration the locations that appear to have no chance of ranking at the top. It is a tempting consideration, as it will facilitate the deliberations by focusing on fewer solutions and thus will reduce the time needed to reach a final decision.
If the negotiation team were to succumb to the temptation and re-structure the problem with only the five choices (MA1, MA2, MA3, MA8 and MA9), the results produced with the MCDA Index Tool would appear as in Figs. 3 and 4. The rankings have transformed dramatically; in fact, location MA1 might now be preferable over MA9, while the uncertainty of the choice has also increased significantly. This is the dreaded Rank Reversal (RR) paradox that plagues most MCDA techniques and presents a unique challenge in real-world problems28. Due to the RR paradox, the results can differ depending on which alternatives are included. Consider, for instance, a strategy where in each step the alternative(s) with no first-place ranks are dropped and the process is repeated for the remaining ones. The results of this, admittedly arbitrary, elimination strategy are highlighted in Fig. 5. Clearly, the order of choice shifts from MA9 to MA1 and returns to the original state only when the two alternatives compete head-on.
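The mechanics of rank reversal can be reproduced in a few lines. The sketch below uses min-max normalization with a weighted sum on invented data (alternatives A-D and the weights are hypothetical, not the case study values): withdrawing one extreme alternative shifts the normalization bounds and reverses the order of the survivors.

```python
def score(matrix, weights):
    """Min-max normalize each (benefit-type) criterion, then take a
    weighted sum."""
    cols = list(zip(*matrix))
    bounds = [(min(c), max(c)) for c in cols]
    return [sum(w * (x - lo) / (hi - lo)
                for x, w, (lo, hi) in zip(row, weights, bounds))
            for row in matrix]

weights = [0.45, 0.55]                        # hypothetical weights
alts = {"A": [10, 2], "B": [6, 6], "C": [2, 9], "D": [3, 12]}

full = dict(zip(alts, score(list(alts.values()), weights)))
assert full["A"] > full["B"] > full["C"]      # A outranks B and C ...

survivors = ["A", "B", "C"]
reduced = dict(zip(survivors, score([alts[n] for n in survivors], weights)))
# ... but once D (an extreme point) is withdrawn, the normalization
# bounds shift and the order of the survivors reverses completely.
assert reduced["C"] > reduced["B"] > reduced["A"]
```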
There is a school of thought in MCDA that recognizes that the head-on comparison of MA9 and MA1 is a more reliable indicator of the preferred solution and proposes to compare all the alternatives directly, one on one29. The Condorcet method, with origins in social choice theory, purportedly prevents distortions by making the relative position of two alternatives independent of their positions relative to any other30. For the example at hand, comparing the 9 alternatives head-on requires the MCDA analysis of 36 pairs. Table 2 summarizes the results obtained through this approach, with the value of each cell indicating the ratio of the 1st ranks achieved by the column-alternative to the 1st ranks achieved by the corresponding row-alternative. If the number is more than 1, the column-alternative wins over the row-alternative; if it is less than 1, the row-alternative wins; and if it is exactly 1, there is a tie.
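The pairwise tournament can be sketched as follows. Note this is a simplified duel rule (each criterion's weight goes to the better alternative) on toy data; the paper's Table 2 instead tallies ratios of first ranks across the normalization/aggregation pairs.

```python
from itertools import combinations

def duel(a, b, weights, larger_is_better):
    """Head-on comparison of two alternatives: each criterion's weight
    goes to the alternative with the better value (ties split it)."""
    sa = sb = 0.0
    for x, y, w, better_big in zip(a, b, weights, larger_is_better):
        if x == y:
            sa += w / 2
            sb += w / 2
        elif (x > y) == better_big:
            sa += w
        else:
            sb += w
    return sa, sb

def condorcet_wins(alts, weights, larger_is_better):
    """Tally pairwise victories over all C(n, 2) duels."""
    wins = {name: 0 for name in alts}
    for (na, a), (nb, b) in combinations(alts.items(), 2):
        sa, sb = duel(a, b, weights, larger_is_better)
        if sa > sb:
            wins[na] += 1
        elif sb > sa:
            wins[nb] += 1
    return wins

alts = {"A": [1, 5], "B": [2, 4], "C": [3, 3]}   # invented data
wins = condorcet_wins(alts, [0.6, 0.4], [True, True])
assert wins == {"A": 0, "B": 1, "C": 2}          # C beats both, B beats A
assert len(list(combinations(range(9), 2))) == 36  # 9 alternatives -> 36 duels
```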
While the Condorcet method appears to restore the approximate order obtained in Fig. 1, where all 9 alternatives were examined together, there are still interesting rank reversals. For instance, it restores MA8 as a contender, even though this choice disappeared early in the elimination strategy.

The Rank reversal conundrum and identifying the top choices
RR is a paradox because the rank order of alternatives can change when a current choice is eliminated from the set of alternatives or a new one is added. RR is a challenge because it undermines the credibility of the ratings and rankings of MCDA and reinforces the suspicions of stakeholders that rankings can be "manipulated" to advance pre-determined agendas. Several different MCDA techniques have been proposed in recent years claiming to be rank-reversal-free31. The experience from the field, though, is that while these methods may mitigate RR problems, they are not completely reversal-free. The most promising among them is the low-complexity Stable Preference Ordering Towards Ideal Solution (SPOTIS) approach32.
SPOTIS is based on the classical MCDM structure but requires additional information on the min and max bounds of the score values for each criterion. These bounds, along with the polarity of each criterion, define the ideal best solution. For the offshore wind farm installation case study of the previous section, the ideal solution point is computed in Table 3. The SPOTIS method proceeds to compute the closeness of each alternative to the ideal solution point by utilizing a simple distance metric (E1) and normalizing it with respect to the distance between the min and max values for each criterion. This leads to a unitless average distance of each alternative from the multi-criteria ideal one32. Table 4 summarizes the average distances computed for the nine alternatives of the offshore wind farm installation and the resultant ranking. MA9 emerges clearly as the preferred solution, with MA8 and MA1 practically tied for second place.
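The SPOTIS scoring step can be sketched as follows, on toy bounds and weights (not the case study's Table 3 values). Because the bounds are fixed a priori, each alternative's distance is computed independently of the others, which is the source of the method's robustness to rank reversal.

```python
def spotis_distances(matrix, weights, bounds, larger_is_better):
    """SPOTIS sketch: weighted average normalized distance of each
    alternative from the ideal point implied by the criteria bounds and
    polarities. Smaller distance = better rank."""
    ideal = [hi if big else lo
             for (lo, hi), big in zip(bounds, larger_is_better)]
    return [sum(w * abs(x - star) / (hi - lo)
                for x, star, w, (lo, hi) in zip(row, ideal, weights, bounds))
            for row in matrix]

bounds = [(0.0, 10.0), (0.0, 10.0)]   # illustrative a-priori bounds
polarity = [True, False]              # benefit-type, cost-type
weights = [0.5, 0.5]

full = spotis_distances([[8, 2], [5, 5], [10, 0]], weights, bounds, polarity)
# Dropping the third alternative leaves the others' distances (and hence
# their relative order) unchanged -- no rank reversal.
subset = spotis_distances([[8, 2], [5, 5]], weights, bounds, polarity)
assert subset == full[:2]
```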
It is safe to assume that the negotiation team might consider dropping MA6 from the next iteration. Since the removal of MA6 does not change the bounds of the criteria (and this is indeed the case here), the average distances of the remaining alternatives remain the same and there is no rank reversal. If, on the other hand, MA4 ÷ MA7 are removed as locations that appear to have no chance of ranking at the top, then the bounds, the ideal solution, and the average distances of the remaining five alternatives do change, yet there is still no rank reversal (Table 5). While SPOTIS appears to be rank-reversal-free for the case study at hand, it is not certain that the rankings it generates are superior to those of the other MCDA techniques. Table 6 summarizes the alternative rankings computed to this point. It is apparent that while MA9, MA1 and MA8 are the top three contenders, with MA2 and MA3 following closely, the relative ranking differs depending upon the method used.
Clearly, many more MCDA techniques could have been used, since no one method can be considered the most appropriate for all situations. Multiple attempts in the literature to compare or benchmark methods against each other have failed to produce results reproducible across a wide range of paradigms33. Beyond the fundamental technical aspects of each method, the use of MCDA requires a strong "craft" element34. Practitioners should be cognizant of the requirements, limitations, and peculiarities of each method in their field to use them effectively, as well as of the fundamental observation that the choice of a particular MCDA method can and does significantly influence the outcome.
In offshore construction, the combination of classical and Condorcet MCDA along with SPOTIS has proven to be sufficient for the recognition of the frontrunners among the various alternative decision choices. (In the example considered, the 31 distinct combinations of normalization and aggregation methods assessed through the MCDA Index Tool, the 36 one-on-one comparisons of the 9 alternative choices of the MCDA Condorcet method, and the results of the rank-reversal-robust SPOTIS technique provide a sufficiently rich milieu to recognize the top choices.) Identifying the frontrunners is essential for the second round of the planning process, where significant time, effort and funding will be spent on detailing the distinct characteristics of each alternative. Eliminating candidates during the first round is often a contentious issue with the stakeholders and the process should be such that it can withstand scrutiny. In real-world offshore construction, this is often achieved with the use of an expert system. Figure 6 illustrates succinctly the design of such an expert system that utilizes the rankings obtained via MCDA, MCDA Condorcet, SPOTIS (and, if there are many choices, MCDA top 5 and SPOTIS top 5) to pick the top alternatives.
The expert system is based on a knowledge platform that incorporates the expert knowledge and experience of the contractor in the field of the offshore construction. The knowledge base is continuously updated through a learning module, as new projects are added to the portfolio of the company and ongoing and completed projects are reviewed for a posteriori assessment of the choices made. The simple user interface requires only the input of the rankings of the various choices that emerged through the MCDA techniques applied. The inference engine operates on a set of relatively simple, yet proprietary, rules of the if-then type rather than through conventional procedural code.
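The actual rule base is proprietary, but the flavor of such if-then rules can be conveyed with an invented stand-in: declare an alternative a frontrunner when it reaches the top-k under a majority of the methods. The rule and the example rankings below are illustrative, loosely patterned on the case study's outcome, and are not the contractor's rules.

```python
from collections import Counter

def pick_frontrunners(rankings, top_k=3):
    """Invented stand-in for the proprietary rule base: an alternative is
    a frontrunner if it appears in the top-k under a majority of the
    MCDA methods supplied."""
    votes = Counter()
    for order in rankings.values():
        votes.update(order[:top_k])          # one vote per method
    majority = len(rankings) / 2
    return sorted(alt for alt, v in votes.items() if v > majority)

# Illustrative inputs: rankings produced by the three method families.
rankings = {
    "mcda":      ["MA9", "MA1", "MA8", "MA2"],
    "condorcet": ["MA9", "MA8", "MA1", "MA3"],
    "spotis":    ["MA9", "MA8", "MA1", "MA2"],
}
assert pick_frontrunners(rankings) == ["MA1", "MA8", "MA9"]
```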
The entire approach is based on fundamentally deterministic criteria, and this is a distinct limitation. (Standard sensitivity analysis can be used, though, to examine the extent to which changes in the weights and scores of the criteria influence the robustness of the rankings obtained through each technique.) In offshore construction, real-world criteria weights and scores are often assessed based on multiple conflicting information sources. To address such cases, another approach, based on fuzzy logic, is used.

Fuzzy logic and criteria clustering
The Characteristic Object METhod (COMET) has been proposed recently to address MCDA problems with fuzziness in the criteria. COMET achieved prominence because it has been proven to be immune to the RR paradox. This property is interesting, although it is unfair to compare it with classical MCDA methods as it requires additional information in the structuring of the decision problem35,36. COMET has been recognized in the offshore construction industry for: (i) its incorporation of fuzziness in the criteria; and (ii) its intuitive methodology for hierarchical clustering of the criteria. Each of these issues is addressed in turn. Expert knowledge on the significance level of each of the criteria is used to convert its range of values to a triangular fuzzy number (m1, m2, m3), where m1 represents the smallest likely value, m2 the most probable value, and m3 the largest possible value of the fuzzy event. Table 7 indicates these characteristic values for the criteria of the running offshore wind farm installation case study. For criteria AC1, AC2, and AC4, the values of which are practically binary, only the two extremes are represented. The COMET method then proceeds by requiring an expert panel to score in terms of preference all pairwise combinations of characteristic objects to create a rule base.
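A triangular fuzzy number (m1, m2, m3) induces the usual piecewise-linear membership function, which can be sketched as follows. The solar-radiation figures used in the example are illustrative values consistent with the running text, not an exact reproduction of Table 7.

```python
def tri_membership(x, m1, m2, m3):
    """Degree of membership of x in the triangular fuzzy number
    (m1, m2, m3): 0 outside [m1, m3], rising linearly to 1 at m2."""
    if x == m2:
        return 1.0
    if x <= m1 or x >= m3:
        return 0.0
    if x < m2:
        return (x - m1) / (m2 - m1)
    return (m3 - x) / (m3 - m2)

# Illustrative: solar radiation (AC6) fuzzified as (1600, 1700, 1800) kWh/m^2.
assert tri_membership(1700, 1600, 1700, 1800) == 1.0   # most probable value
assert tri_membership(1650, 1600, 1700, 1800) == 0.5   # halfway up the ramp
assert tri_membership(1850, 1600, 1700, 1800) == 0.0   # outside the support
```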
For simplicity of presentation, assume momentarily just two criteria, say AC3 and AC6. Each combination of a distinct value of AC3 with a distinct value of AC6 is called a "characteristic object", akin to a state vector in the (AC3, AC6) two-dimensional space. This in turn requires the expert valuation of whether, for instance, the combination (150 kV, 1600 kWh/m²) is preferred over (66 kV, 1700 kWh/m²). Both criteria are of the benefit type, hence higher values are preferable. If the expert panel consistently prefers the bigger incremental increase in AC3 over the less impressive step up in AC6, then the scoring of the 9 possible characteristic objects (CO) leads to the rule base in Table 8 and the triangular fuzzy numbers in Fig. 7. The rule base in Table 8 not only ranks the 9 possible pairs but also assigns a corresponding preference score between 0.0 (least desirable) and 1.0 (most desirable). The preference score can then be used to rank the alternative choices MA1 ÷ MA9 as in Table 9. MA8 and MA9 tie in first place and MA1 and MA2 in second place, while MA6 and MA7 are the least appealing alternatives. The COMET technique is easy to implement and the online tool DSS COMET (www.comet.edu.pl) can automate the process. But its applicability is constrained by the fact that it is not practical for an expert panel to examine more than 2-3 criteria at a time. For instance, in the offshore wind farm installation case study there are three criteria with two options (AC1, AC2, AC4) and four criteria with three options (AC3, AC5, AC6, AC7). Processing the full problem with COMET would lead to N = 3⁴ × 2³ = 648 characteristic objects, requiring ½N(N−1) = 209,628 pairwise comparisons. Above and beyond the fact that so many comparisons are exhausting for the expert panel, the human brain cannot make inferences with more than 3-4 items stored in the working memory.
(Working memory is the active version of short-term memory related to the temporary storage and manipulation of information, and its limited capacity is a central bottleneck of human cognition37,38.)
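The COMET inference step itself, scoring an alternative against the rule base, can be sketched as follows. The characteristic points echo the values quoted in the running text, but the rule-base scores are invented for illustration (monotone, weighted toward AC3 as described above) and are not the paper's Table 8:

```python
from itertools import product

def tri(x, m1, m2, m3):
    """Membership of x in the triangular fuzzy number (m1, m2, m3)."""
    if x == m2:
        return 1.0
    if x <= m1 or x >= m3:
        return 0.0
    return (x - m1) / (m2 - m1) if x < m2 else (m3 - x) / (m3 - m2)

def comet_preference(values, char_points, rule_pref):
    """COMET-style inference: the preference of an alternative is the sum,
    over all characteristic objects, of the product of its membership
    degrees times the preference score assigned to that object."""
    pref = 0.0
    for combo in product(*(range(len(p)) for p in char_points)):
        mu = 1.0
        for x, pts, i in zip(values, char_points, combo):
            m2 = pts[i]
            m1 = pts[i - 1] if i > 0 else m2              # left neighbour
            m3 = pts[i + 1] if i < len(pts) - 1 else m2   # right neighbour
            mu *= tri(x, m1, m2, m3)
        pref += mu * rule_pref[combo]
    return pref

# Two criteria: AC3-like points (kV) and AC6-like points (kWh/m^2).
points = [(66, 150, 400), (1600, 1700, 1800)]
# Invented preference scores for the 9 COs, scaled to [0, 1] and
# favoring increases in AC3 (index i) over increases in AC6 (index j).
rule_pref = {(i, j): (3 * i + j) / 8 for i in range(3) for j in range(3)}

# An alternative sitting exactly on a CO gets that object's score; values
# in between are interpolated through the triangular memberships.
assert comet_preference((150, 1700), points, rule_pref) == rule_pref[(1, 1)]
```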
This apparent "curse of dimensionality" has led the authors of the COMET technique to propose a practical alternative by decomposing the problem into smaller ones through clustering of similar criteria 39 . Creating a structure of decisional models interconnected with each other significantly reduces the number of pairwise comparisons needed as well as the cognitive load on the expert panel.
In the running case study, it is plausible to group criteria AC1, AC2 and AC4 under a "marine" banner; AC3 and AC6 under an "energy" banner; and AC5 and AC7 under a "comfort" banner. Figure 8 depicts the hierarchical structure of decomposing the full problem into three smaller ones.
A separate score is computed for each of the three sub-problems and a final composite score is produced as the product of the scores every alternative receives from each sub-problem. This modified COMET approach leads to 2³ + 3² + 3² = 26 characteristic objects and a total set of 28 + 36 + 36 = 100 pairwise comparisons. (Admittedly, these savings are achieved by not solving the complete problem and hence immunity to RR is no longer guaranteed.) The "energy" criteria were already examined in Table 8 and Fig. 7 and the results in Table 9 are incorporated in column PE of Table 10. For brevity, the results for the "marine" criteria and the "comfort" criteria are also summarized in columns PM and PC, respectively, of Table 10.
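A back-of-the-envelope tally of the savings, under the three-cluster grouping described in the text, can be sketched as follows (the exact bookkeeping depends on how the sub-models are structured, so treat the cluster totals as illustrative):

```python
from math import comb, prod

def comet_cost(points_per_criterion):
    """Characteristic objects and pairwise comparisons for one COMET model."""
    n = prod(points_per_criterion)
    return n, comb(n, 2)

# Undecomposed model: four 3-point criteria and three 2-point criteria.
assert comet_cost([3, 3, 3, 3, 2, 2, 2]) == (648, 209628)

# Three-cluster decomposition, each cluster solved as a separate model:
# "marine" (three 2-point criteria), "energy" and "comfort" (two 3-point
# criteria each).
costs = [comet_cost(c) for c in ([2, 2, 2], [3, 3], [3, 3])]
total_objects = sum(n for n, _ in costs)   # 8 + 9 + 9
total_pairs = sum(p for _, p in costs)     # 28 + 36 + 36

def composite_score(sub_scores):
    """Final score of an alternative: the product of its sub-model scores."""
    return prod(sub_scores)
```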
In the interest of reproducibility of the results, the expert scoring of the COs for the values of AC1, AC2, AC4 favors water depth over distance from shore or from port. Similarly, the expert scoring of the COs for the values of AC3, AC6 and AC5, AC7 favors the pairs that exhibit higher values by a bigger margin. (For example, the AC3/AC6 pair 400/1700 is preferred over the 66/1800 one.) In the interest of brevity, Table 10 summarizes directly the scores achieved by each alternative in the evaluation of each sub-problem as well as the composite score and ranking.
From the results of the various methods in Table 6, the option MA9 is the top choice, followed by MA1 and MA8. But the clustered COMET results in Table 10 indicate that MA1 is the top choice, followed by MA9 and MA8, with MA1 holding a distinct 22% advantage over MA9. An additional advantage of the COMET approach is that it allows for straightforward assessment of the values of the criteria that may shift the rank position of a choice36.

Intelligent MCDA methodology
It should be apparent at this point that MCDA can assist individuals and organizations to make better decisions. But the outcome cannot be the automatic result of an MCDA algorithm; it should always be a decision made by the stakeholders after an exhaustive review of all the data at hand. In the case of offshore construction, the complexity of the tasks involved and the need to engage diverse groups of experts and stakeholders in the process makes matters more difficult.
In offshore construction, where time is of the essence, experienced contractors make sure that there is real participation and deliberation in the application of MCDA to reduce unnecessary iterations. Yet, iterations are necessary to reach an acceptable and satisfactory outcome. Figure 9 captures the proposed intelligent framework based on the mathematical foundations of MCDA and the real-world experience in offshore construction. The process starts with the phase of scoping the problem, identifying a basic set of choices, and establishing the criteria for the decision-making process. This phase involves mixed teams from the contractor and the stakeholders and, possibly, outside experts. Once the initial scope of the problem is set, the contractor team performs an in-house COMET analysis to comprehend better the characteristics of the choices involved, to identify potential rankings and to develop a sense of the impact of the various criteria.
The third phase involves the use of the expert system defined in Fig. 6, with mixed teams from the contractor and the stakeholders. The desired outcome of this phase is a smaller set of alternatives and fewer criteria, but it is quite possible that the process might need to re-start with new alternatives added to the mix, as the continuous exposure of the stakeholders to the issues involved might change their view on the scope of the problem.
If a smaller, more realistic set of choices emerges from the expert system phase, the contractor team performs another, more focused, COMET analysis in-house to inform the final decision phase. In this final phase, the decision is made by the stakeholders themselves, after an exhaustive review of all the evidence at hand.

Conclusions
Impact assessment, the evidence-based prospective impact analysis part of the planning stage of any development project, is subject to regulatory oversight providing for public engagement, reconciliation, and partnership in the public interest during the design phase. Indeed, impact assessment of large-scale projects involves a plethora of technical, economic, social, and environmental factors that must be assessed along with the expectations of the stakeholders of each project.
Molding the multiple perspectives of diverse stakeholders into a single collective choice is a key challenge in impact assessment, and MCDA is the de facto methodology used to rank decision options based on a predetermined set of evaluation criteria. Different MCDA techniques, however, may lead to different decisions when applied to the same problem, while different sets of criteria and weights may rank choices differently even when the same method is applied. This is a cause of concern, and even acrimony, amongst the stakeholders, often leading to protracted negotiations and delaying construction.
The objective of this paper was to ameliorate the effects of the inherent subjectivity in MCDA techniques and to develop a consensus amongst the stakeholders in a data-driven setting. This was accomplished not by devising a new MCDA technique but, rather, through a novel IDSS employing existing methods from the MCDA toolbox and implemented via web-based software. The design of the system is informed both by theoretical MCDA (and COMET in particular) and by field experience.
While the intelligent methodology presented in this paper has been detailed through the running example of a case study from offshore construction, the proposed approach is directly applicable to all large-scale projects requiring impact assessment throughout their design phase. Indeed, real-world offshore construction is representative of the field of large-scale projects where a plethora of technical, economic, social, and environmental factors collude to create a morass of complex issues and expectations that are difficult to assess in a uniform canvas.
It would be natural at this point to offer to fully automate the process as an extension of the current research. It is the strong conviction of the authors, though, that a desirable outcome cannot be the product of an automated process. The criteria used, as well as their respective weights, are products of expert opinion and cannot be fully captured by an expert system. The deliberative nature of the proposed framework, while cumbersome, is essential to form consensus, especially when the technical, economic, and regulatory issues involved create an often-fuzzy decision tableau.

Data availability
The original case study data have been published in26, publicly available at https://www.mdpi.com/2077-1312/10/2/224. All other data generated or analyzed during this study are included in the present manuscript.