Abstract

Percutaneous radiofrequency ablation (RFA) is a minimally invasive technique that destroys cancer cells by heat. The heat results from focusing energy in the radiofrequency spectrum through a needle. Among other benefits, this enables the treatment of patients who are not eligible for open surgery. However, the possibility of recurrent liver cancer due to incomplete ablation of the tumor makes post-interventional monitoring via regular follow-up scans mandatory. These scans have to be carefully inspected for any conspicuous findings. Within this study, the RF ablation zones from twelve post-interventional CT acquisitions have been segmented semi-automatically to support the visual inspection. An interactive, graph-based contouring approach, which prefers spherically shaped regions, has been applied. For the quantitative and qualitative analysis of the algorithm’s results, manual slice-by-slice segmentations produced by clinical experts have been used as the gold standard (and have also been compared with each other). As evaluation metric for the statistical validation, the Dice Similarity Coefficient (DSC) has been calculated. The results show that the proposed tool provides lesion segmentation with sufficient accuracy much faster than manual segmentation. The visual feedback and interactivity make the proposed tool well suited for the clinical workflow.

Introduction

Liver cancer is on the rise worldwide, mainly because of hepatitis infections and alcohol abuse. Especially patients with primary liver cancer (hepatocellular carcinoma, HCC) have a poor prognosis because of its late symptomatic onset, resulting in a median survival time of four to six months from the time of diagnosis when untreated. According to the recent treatment guidelines1, radiofrequency ablation (RFA) serves as a first-line therapy approach for early HCC in patients with liver cirrhosis. Also, for metastatic liver disease, the local use of ablation therapies is increasing. While the technique was originally developed for patients who were not eligible for surgery, its use has now expanded to serve as a bridge to liver transplantation and even as an alternative to surgical resection in the early stages of the disease2. RFA was first described in the early 1990s, followed by huge technical advances throughout the last decades. The underlying principle is based on a high-frequency alternating current, which is delivered through one or more electrodes into the treated lesion3 (see Fig. 1 for a schematic view of RFA needle placement in a liver tumor, including the surrounding necrotic zone, and Fig. 2 for a postinterventional computed tomography (CT) scan with the ablation needle). Ideally, the heat destroys cancer cells by inducing a coagulative necrosis, with cellular proteins being denatured. Most commonly, tissue necrosis already begins at approximately 60 °C, but temperatures around 100 °C are usually needed to achieve satisfying results. The amount of destroyed tissue mostly depends on the individual impedance and the placement of the needle. Furthermore, the deposited heat is inversely proportional to the square of the distance from the electrode. As a result, tissue cools rapidly away from the tip of the needle probe. Hence, the proximity to large blood vessels also plays a major role in the heat transmission.
Blood flow protects the vessel wall from damage but, on the other hand, acts as a heat sink by cooling down nearby tissue, limiting the method’s overall success4. As a consequence, a significant mismatch between the expected and the actually induced lesion size and geometry has been observed in many radiofrequency ablations performed in the liver. It can lead to over-treatment with severe injuries (up to 9% major complications5) or under-treatment with tumor recurrence (up to 40%5).
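The inverse-square falloff mentioned above can be made concrete with a tiny sketch (purely illustrative; the function name and the reference distance are our own simplifications, not values from the study):

```python
def relative_heating(distance_mm, reference_mm=1.0):
    """Relative heat deposition at a given distance from the electrode,
    normalized to 1.0 at `reference_mm`, assuming the inverse-square
    falloff described in the text (illustrative simplification only)."""
    return (reference_mm / distance_mm) ** 2

# Tissue 3 mm from the electrode receives only ~1/9 of the reference
# heating, which is why tissue cools rapidly away from the needle tip.
ratio = relative_heating(3.0)
```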

Figure 1: Schematic view of the liver (brown) with a fully expanded and umbrella-shaped radiofrequency ablation (RFA) needle (black).

The needle tips are located in a liver tumor (red) surrounded by the so-called necrotic zone (light brown).

Figure 2: Postinterventional computed tomography (CT) scan of a radiofrequency (RF) ablation with the ablation needle still in place: the upper right window shows an axial plane, the lower left window a sagittal plane and the lower right window a coronal plane.

The RFA needle is easily recognizable, because its umbrella-shaped tips show up very brightly within the ablation zone inside the liver.

Tumor recurrence is a major limitation of the survival rates for all types of cancer therapies, e.g., resection and RFA. Cohort patient studies have shown evidence of a significant reduction in the recurrence rate if the RFA generates a safety rim around the tumor6. This elicits the need for a reliable method to compare the localization, size and geometry of the tumor in the preoperative images, on the one hand, with the thermally induced lesion after ablation, on the other hand.

Tumor recurrence can be diagnosed by the detection of typical alterations in tissue enhancement. Nevertheless, an increase in size or a change in geometry of the lesion seems to be a much more sensitive indicator of early recurrence in follow-up imaging. Therefore, a reliable and feasible determination of the ablation zone at baseline and follow-up may contribute to a positive outcome for the patient and can lead to a better understanding of the causes of new tumor growth. Consequently, this additional knowledge might lead to improved ablation protocols or even new treatment strategies.

Determination of therapeutically induced lesions after minimally invasive cancer treatment can be performed by segmentation. This is usually done by a time-consuming manual procedure and is, therefore, not yet part of the clinical routine. A validated segmentation algorithm may potentially increase the acceptance of the method in the medical community and, consequently, lead to a significant benefit in patient treatment.

The segmentation field in computer vision deals with the computer-aided analysis and classification of (medical) image data in a broad range of applications, such as the automatic detection of humans in videos7 or the automatic volumetry of brain images of patients8. In general, the objective of a segmentation algorithm is to support and speed up a time-consuming manual selection and contouring process. A number of algorithms have been proposed, e.g., Active Contours (ACM)9,10, Deformable Models11,12, Active Appearance Models (AAM)13,14, graph-based approaches15,16, fuzzy-based approaches17, or neural networks18, which are often based on a mathematical model from other disciplines, like physics or electrical engineering, or combine several mathematical models. Alternatively, the algorithms can be classified as either fully automatic or semi-automatic. In the latter case, the user is able to guide the segmentation algorithm to avoid an unsatisfying segmentation outcome. In contrast, fully automatic approaches generally need a re-run after failure (e.g., with other parameter settings), which can be a very frustrating trial-and-error process for the end user. An example of a fully automatic liver tumor segmentation approach for abdominal computed tomography scans has been introduced by Abdel-massieh et al.19. The approach does not require any interaction from the user; it applies Gaussian smoothing and isodata thresholding to turn the gray value image into a binary representation, with tumors visible as black spots on a white background. Another fully automatic liver tumor segmentation algorithm, using Fuzzy Generalized C-Means (FGCM) and Possibilistic Generalized C-Means (PGCM), has been presented by Mandava et al.20.
This kernel-based clustering algorithm incorporates Tsallis entropy to resolve long-range interactions between tumor and healthy tissue intensities and uses the datasets from the MICCAI Liver Tumor Segmentation Challenge 08 (LTS08) for evaluation.
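Since isodata thresholding recurs as a building block in such fully automatic pipelines, a minimal sketch of the principle may be helpful (a toy NumPy implementation of our own, not the code of Abdel-massieh et al.): the threshold is repeatedly set to the midpoint between the mean intensities of the two classes it induces, until it stabilizes.

```python
import numpy as np

def isodata_threshold(image, tol=0.5, max_iter=100):
    """Iterative (isodata) threshold selection: repeatedly set the
    threshold to the midpoint of the mean background and mean
    foreground intensities until it stabilizes."""
    t = image.mean()
    for _ in range(max_iter):
        below = image[image <= t]
        above = image[image > t]
        if below.size == 0 or above.size == 0:
            break
        t_new = 0.5 * (below.mean() + above.mean())
        if abs(t_new - t) < tol:
            t = t_new
            break
        t = t_new
    return t

# Toy data: dark "tumor" voxels (around 40) on brighter tissue (around 160).
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(40, 5, 500), rng.normal(160, 5, 1500)])
t = isodata_threshold(img)
binary = img <= t  # tumors appear as the dark class
```

On this toy data the threshold converges near the midpoint of the two class means (about 100), separating the classes cleanly.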

As we propose a semi-automatic approach in the area of liver tumor treatment, we will introduce more background work within this field. A 3D fuzzy-based approach to liver tumor segmentation has been introduced by Badura and Pietka21. Their semi-automatic liver segmentation scheme consists of a seed point selection, three-dimensional anisotropic diffusion filtering, and adaptive region growing, supported by a fuzzy inference system. Zhang et al.22 have proposed an interactive tumor segmentation method for CT scans of the liver using a Support Vector Machine (SVM) classifier combined with the watershed transform. In summary, they partition the CT volume with a watershed transform after some pre-processing operations, followed by an SVM classifier trained on the user-selected seed points. Moreover, morphological operations are performed to refine the rough segmentation result of the SVM classification. A related SVM-based semi-automatic method for liver tumors from computed tomography angiography (CTA) images using voxel classification and affinity constraint propagation has been presented by Freiman et al.23. Their method employs user-defined seeds to classify the liver voxels into tumor and healthy tissue groups, supported by an SVM classification engine. In a subsequent step, an energy function describing the propagation of these seeds is defined over the 3D image. Another semi-automatic segmentation approach is to use Bayesian rule-based 3D region growing, as proposed by Qi et al.24. They initialize a bag of Gaussians at manually selected seeds, which they iteratively update during the growing process. Additionally, morphological operations are performed to refine the result afterwards, and, finally, the related continuous segmentation map is thresholded to obtain a binary segmentation.
More closely related to our work is that of Häme and Pollari25, who presented a liver tumor segmentation method to reduce the manual labor and time required in the treatment planning of radiofrequency ablation. To achieve this, they introduced a semi-automatic liver tumor segmentation approach with a hidden Markov measure field model and a non-parametric distribution estimation.
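The seeded region growing that several of these approaches build on can be illustrated with a short sketch (a toy implementation of our own, not any of the cited algorithms): starting from a user-defined seed, all 6-connected voxels whose intensity stays within a tolerance of the seed intensity are flood-filled.

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, tol):
    """Minimal seeded 3D region growing (illustrative sketch): flood-fill
    all 6-connected voxels whose intensity lies within `tol` of the
    seed intensity."""
    seed_val = volume[seed]
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
               and not mask[n] and abs(volume[n] - seed_val) <= tol:
                mask[n] = True
                queue.append(n)
    return mask

# Toy volume: a dark 3x3x3 "lesion" embedded in bright background.
vol = np.full((10, 10, 10), 200.0)
vol[3:6, 3:6, 3:6] = 40.0
mask = region_grow(vol, seed=(4, 4, 4), tol=20.0)
print(mask.sum())  # 27 voxels
```

The cited methods extend this basic scheme with adaptive homogeneity criteria, fuzzy inference, or Bayesian updates of the intensity model during growth.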

Others working in the specific area of (semi-)automatic segmentation of ablation zones in RFA datasets of the liver are Passera et al.26. They claim to present the first attempt at a quantitative tool for assessing the accuracy of RFA. For the segmentation, they use a Live-Wire algorithm27,28 – implemented within the MeVisLab platform (www.mevislab.de) – and clustering. However, they did not include RFA data in their study if the needle was still present within the scan. Yet this is precisely the case when the interventionalist particularly wants to assess the size of the induced lesion under the assumption of continued or additional ablation; to avoid repositioning or replacement of the instrument, the RFA probe remains in the patient while the control scan is performed. Additionally, the segmentation approach worked only in 2D, which can be very time-consuming for tumors or ablation zones that extend over many slices; the reported segmentation time was ten minutes. A separate radiofrequency ablation registration, segmentation, and fusion tool called RFAST has been presented by McCreedy et al.29, who also apply a Live-Wire-based method in single 2D slices. However, the segmentation process has not been described in detail and no quantitative segmentation results are presented. Keil et al.30 have presented their results of the semi-automated segmentation of liver metastases treated by radiofrequency ablation. The segmentation algorithm31 used in their study consists of six steps, in which three-dimensional region growing and morphological operations, like erosion and dilation, are performed. In addition, the user needs to draw a diameter across the lesion or, for smaller lesions, provide a single click inside the lesion as initialization.
Weihusen et al.32 have introduced a workflow-oriented software support for image-guided RFA of focal liver malignancies, in which they also segment the coagulation necrosis in the (post-interventional) control scans after the ablation. To this end, the user has to provide a seed point inside the ablation zone, which starts a morphology-based region growing algorithm proposed by Kuhnigk et al.33. Afterwards, the segmentation result can be corrected towards a more “irregular” or “roundish” geometry by manual interaction. Bricault et al.34 have presented preliminary results of a 3D shape-based analysis of CT scans for the detection of local recurrence of liver metastases after RFA treatment. For that purpose, they applied a semi-automated 3D segmentation process that uses a “tagged” watershed algorithm. The semi-automated segmentation takes, on average, about 4 minutes, and the minimum required user interaction consists of two mouse clicks: one in the ablated tumor and one in the surrounding liver parenchyma. Another volumetric evaluation study of the variability of the size and shape of necroses induced by RFA in human livers has been presented by Stippel et al.35. The volumetric evaluation was performed with the software package VA 40C from Siemens on a Leonardo workstation. For the stepwise segmentation of the ablation-induced lesions, a region of interest had to be defined in each slice by manually tracing the approximate borders of the lesion. Afterwards, the precise border of the lesion was determined by a filter algorithm (provided by the software package), which is based on density differences between the ablated tissue and the liver tissue. To conclude this section, we want to point the interested reader to an overview publication about computer-assisted planning, intervention and assessment of liver tumor ablation by Schumann et al.36.
To the best of our knowledge, there is no work that has studied the semi-automatic 3D segmentation of post-interventional RF ablation zones, especially with clinical data that still has the ablation needles in place. Moreover, we are the first to provide a unique dataset collection from the clinical routine of post-interventional RFA cases (with and without ablation needles still in place) to the research community.

The contributions are organized as follows: first, the Results section presents the outcomes of our experiments. Then the Discussion section concludes our contribution and outlines areas for future research. Finally, the Materials and Methods section presents the details of the proposed segmentation algorithm and the online resources where the anonymized (medical) data can be found.

Results

The proposed interactive segmentation algorithm has been implemented as a C++ module (Visual Studio 2008) for the medical prototyping platform MeVisLab (www.mevislab.de, Version 2.3). The computation of the segmentation result, including the graph construction from the current user-defined seed point position and the optimal min-cut calculation, could be performed within one second (on a laptop with an Intel Core i5-750 CPU, 4 × 2.66 GHz, 8 GB RAM, running Windows 7 Professional x64 with Service Pack 1). This enables real-time feedback, allowing the user to guide the algorithm to a satisfying segmentation outcome.
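The graph construction itself is detailed in the Materials and Methods section. Purely as an illustration of the s-t min-cut step at the core of such graph-based approaches, a tiny Edmonds-Karp max-flow/min-cut sketch on an adjacency-matrix graph could look like this (our own toy code, not the C++ module's implementation):

```python
from collections import deque

def min_cut(n, edges, s, t):
    """Tiny Edmonds-Karp max-flow / min-cut (illustrative only).
    `edges` is a list of (u, v, capacity) for a directed graph on
    nodes 0..n-1; returns the cut value and the source-side node set."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        # find the bottleneck capacity and augment along the path
        path_cap, v = float('inf'), t
        while v != s:
            path_cap = min(path_cap, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= path_cap
            cap[v][parent[v]] += path_cap
            v = parent[v]
        flow += path_cap
    # nodes still reachable from s form the source side of the min-cut
    reachable, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if cap[u][v] > 0 and v not in reachable:
                reachable.add(v)
                q.append(v)
    return flow, reachable

# 4-node toy graph: s=0, t=3; the minimum cut has value 2.
f, source_side = min_cut(4, [(0, 1, 2), (0, 2, 1), (1, 3, 1), (2, 3, 2)], 0, 3)
```

In the segmentation setting, the nodes sampled along the rays correspond to graph nodes, and the nodes that remain connected to the source after the cut define the inside of the ablation zone.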

Figure 3 presents a semi-automatic segmentation result of a post-interventional ablation zone for visual inspection. As the CT data has been acquired immediately after the treatment, the needle used for the ablation is still in place and therefore visible in the scan. The left image shows the axial slice with a user-defined seed point (blue); the red dots mark the border of the segmentation in this slice. They represent the last nodes of the graph that are still connected to the source s after the min-cut. The image in the middle presents the segmentation result in 3D. Again, the red dots show the last nodes of the graph which are still connected to the source after applying the min-cut. Finally, the rightmost image visualizes a closed surface (green) that has been generated from the graph’s nodes representing the segmentation result. Afterwards, this closed surface is used to generate a solid mask for the calculation of the Dice Similarity Coefficient (DSC)37 with the pure manual slice-by-slice segmentations.
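The DSC used throughout the evaluation is twice the overlap of two masks divided by the sum of their sizes; a minimal sketch (our own, assuming the masks are given as binary NumPy arrays):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two overlapping square masks of 16 voxels each,
# sharing a 3x3 = 9 voxel intersection.
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True
print(dice_coefficient(a, b))  # 2 * 9 / (16 + 16) = 0.5625
```

A DSC of 1.0 means perfect overlap with the manual gold standard, 0.0 means no overlap at all.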

Figure 3: Three screenshots of an interactive segmentation result for a postinterventional CT acquisition: an axial plane on the left and two 3D views on the right.

The red dots display the segmentation result in the two left images; in addition, the axial plane contains the user-defined seed point (blue) where the interactive segmentation has been stopped. Finally, the rightmost screenshot includes a closed surface (green) of the interactive segmentation result of the ablation zone, which has been generated from the red dots shown in the middle screenshot. From the closed surface, in turn, a solid mask can be generated, which is used to determine the Dice Similarity Coefficient (DSC) when compared with a pure manual slice-by-slice expert volumetry. Note: for the native scan please see Fig. 2.

Figure 4 displays a comparison of a manual (green) and an automatic (red) segmentation for visual inspection. The upper left window shows both segmentation results as 3D masks superimposed on the original dataset. The upper right, lower left and lower right windows present direct comparisons between the manual and the automatic segmentations on an axial, sagittal and coronal slice, respectively. The yellow cross points to the location of the manually placed seed point for the graph construction. The lower left window shows that the algorithm tends toward over-segmentation compared to the manual counterpart. However, changing the contrast window clearly reveals the reason: the algorithm adapts to the bright border around the ablation zone (Fig. 5).

Figure 4: These screenshots present a direct comparison between a pure manual segmentation (green) and a semi-automatic/interactive segmentation (red).

To this end, the three-dimensional masks of both segmentations (manual/interactive) have been merged and placed within the original dataset at the location of the ablation zone (upper left window). Easily recognizable is the bright stick pointing to the masks, which is the shaft of the RFA needle. The remaining three windows show the planes where the user-defined seed point (yellow cross) has been placed for the interactive segmentation, with the axial plane in the upper right window, the sagittal plane in the lower left window and the coronal plane in the lower right window. Note: for the native scan please see Fig. 2.

Figure 5: Extreme windowing setting for the acquisition and planes from Fig. 4, making a bright border around the ablation zone recognizable.

Note: for the native scan with an appropriate windowing please see Fig. 2.

Additionally, Table 1 presents the direct comparison of manual slice-by-slice segmentations from physician 1 and the results of the semi-automatic segmentations for twelve ablation zones using the Dice Similarity Coefficient. Table 2 summarizes the results from Table 1 (min, max, mean μ and standard deviation σ) for the twelve ablation zones. In the same way, Table 3 presents the direct comparison of manual slice-by-slice segmentations from physician 2 and the semi-automatic segmentation results for the twelve ablation zones via the Dice Similarity Coefficient, and Table 4 summarizes the results from Table 3 (min, max, mean μ and standard deviation σ). Furthermore, Table 5 presents the direct comparison of the manual slice-by-slice segmentations from physician 1 and physician 2 for the twelve ablation zones via the Dice Similarity Coefficient, and Table 6 summarizes the results from Table 5 (min, max, mean μ and standard deviation σ). Overall, the results showed that the manual delineations of the lesion borders (inter-observer DSC 88.8%) gave better results than the automatic method (DSC 77.0–77.1%) – mean DSC values and segmented volumes of both readers were used for simplicity, because the individual values did not differ significantly between both readers. The differences when comparing the inter-observer DSC to the DSC between the automatic method and physician 1 (p < 0.01) and between the automatic method and physician 2 (p < 0.01) were both statistically significant. The results also showed that, on average, the automatically segmented lesion was smaller (33.03 ml) when compared to physician 1 (35.85 ml) or physician 2 (36.18 ml). However, this difference is not statistically significant (p = 0.42 for physician 1 and p = 0.30 for physician 2).

Table 1: Direct comparison of manual slice-by-slice segmentations from physician 1 and semi-automatic segmentation results for twelve ablation zones (AZ) via the Dice Similarity Coefficient (DSC).
Table 2: Summary of results from Table 1: min, max, mean μ and standard deviation σ for twelve ablation zones.
Table 3: Direct comparison of manual slice-by-slice segmentations from physician 2 and semi-automatic segmentation results for twelve ablation zones (AZ) via the Dice Similarity Coefficient (DSC).
Table 4: Summary of results from Table 3: min, max, mean μ and standard deviation σ for twelve ablation zones.
Table 5: Direct comparison of manual slice-by-slice segmentations from physician 1 and physician 2 for twelve ablation zones (AZ) via the Dice Similarity Coefficient (DSC).
Table 6: Summary of results from Table 5: min, max, mean μ and standard deviation σ for twelve ablation zones.

Moreover, Table 7 and Table 8 present the results from Table 1, but differentiate between the cases where the RF electrodes were still visible (Table 7) and the cases where the RF electrodes had already been removed (Table 8). As Table 7 and Table 8 show, there are no significant differences between the cases with and without RF electrodes visible in the images. In more detail, the DSC values between readers were significantly higher (p < 0.01) than those between automatic and manual processing: 88.8% vs. 77.0%, independent of whether the needle was still included in the dataset (86.8% vs. 75.9%, p < 0.05) or not (90.9% vs. 78.1%, p < 0.05). Segmented volumes appeared to be smaller with automatic processing than for the readers, but the mean differences were not significant: 33.03 ml vs. 36.02 ml (p = 0.308). This also held for the cases with the needle included (25.76 ml vs. 25.93 ml, p = 0.917) and without (40.30 ml vs. 46.10 ml, p = 0.249). The mean DSC value of both readers appeared to be smaller when the needle was present (75.9% vs. 78.1% without, p = 0.423), but the difference was not significant. In contrast, the inter-observer DSC was significantly higher when the needle was not present (90.9% vs. 86.8%, p = 0.025). Statistical differences in DSC values and segmentation volumes between methods were analyzed with Wilcoxon signed-rank tests, and differences between the two subgroups (with and without needle present) with a Mann-Whitney U test. All analyses were done with SPSS (Version 20, IBM) using a significance level of 0.05.
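The subgroup comparisons above were computed with SPSS. Purely as an illustration of the Mann-Whitney U statistic involved, a stdlib-only sketch with hypothetical DSC values (invented for the example, not the study's data) could look like this:

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test with normal approximation
    (illustrative stand-in for the SPSS analysis; no tie correction,
    so it is only suitable for untied samples)."""
    n1, n2 = len(x), len(y)
    # U counts, over all pairs, how often x beats y (ties count half)
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    mean_u = n1 * n2 / 2.0
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mean_u) / sd_u
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value
    return u, p

# Hypothetical per-case DSC values (%) for the two subgroups:
with_needle = [86.1, 85.9, 87.2, 86.5, 88.0, 87.1]
without_needle = [90.2, 91.0, 90.5, 91.3, 90.8, 91.6]
u, p = mann_whitney_u(with_needle, without_needle)
```

With the two hypothetical groups fully separated, U is 0 and the approximate two-sided p-value falls below 0.05, mirroring the kind of subgroup difference reported for the inter-observer DSC.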

Table 7: Summary of results from Table 1 presenting only the cases where the RFA needle was still in place and thus visible in the images (cases 1–5 and case 12): min, max, mean μ and standard deviation σ for six ablation zones.
Table 8: Summary of results from Table 1 presenting only the cases where the RFA needle had already been removed (cases 6–11): min, max, mean μ and standard deviation σ for six ablation zones.

The main difference between the automatic and the manual segmentation is the computation time. Using the proposed automatic tool, the segmentation was done in a few seconds, whereas the manual segmentation took between 48 s and 8 min 16 s (mean 3 min 13 s). Note: some initial results have been presented and discussed at the 100th annual meeting of the Radiological Society of North America (RSNA) in December 201438 and at a German computer science workshop (BVM) in March 201539. However, at the RSNA meeting we only showed an initial statistical summary of the segmentation outcome, and at the BVM workshop we presented a summarized description of the interactive algorithm in German. All statistical results and a precise description of the methods are presented in full detail (and in English) only within this manuscript.

Discussion

RFA of liver tumors induces areas of tissue necrosis, which can be visualized reliably in contrast-enhanced CT. In this study, an interactive segmentation algorithm was applied to datasets of routine control CT scans after RFA of liver cancer for the semi-automatic determination of the thermally induced lesions. The segmentation accuracy was found to be sufficient in most of the cases, although manual segmentation still provided the best accuracy. The main advantage of the proposed tool over manual segmentation is its speed, which makes it an appealing alternative for physicians.

As discussed, RFA can serve as a minimally invasive alternative to open surgery and might also be suitable for patients who are inoperable or refuse surgery. In RFA, post-interventional imaging is regularly performed to document the success of the treatment. When the interventional radiologist presumes that continuation of the therapy might be necessary, the RFA needle may still reside in the target organ during image acquisition. Due to the resulting beam-hardening artefacts, image quality can be compromised significantly. However, these datasets have also been segmented and included in the study. For the evaluation of the contouring algorithm, manual slice-by-slice segmentations have been performed by two radiological experts, which enabled the DSC calculation between the manual and the semi-automatic segmentation outcomes. Furthermore, we will provide these unique datasets – including the manual segmentations – to the community for their own research purposes. In summary, the research highlights of the presented work are:

  • Applying an interactive segmentation algorithm to RFA datasets from the clinical routine;

  • Incorporating intraprocedural patient acquisitions with and without RFA needles into the evaluation set;

  • Performing pure manual slice-by-slice segmentations by radiological experts for quantitative and qualitative evaluation;

  • Calculation of the Dice Similarity Coefficient (DSC) for statistical validation of the presented segmentation algorithm;

  • Providing the anonymized RFA data collection with the corresponding manual expert segmentation to the research community.

For a comparison of our method with a state-of-the-art segmentation approach we used GrowCut40, which is freely available within the medical platform 3D Slicer (Fig. 6). The implementation is very user-friendly because it does not require any precise parameter setting; instead, the user initializes the method by marking areas in the image with simple strokes (Fig. 7). Furthermore, we had good experiences with GrowCut for certain types of brain tumors (glioblastoma multiforme (GBM)41 and pituitary adenomas42). However, when we tested the GrowCut algorithm with our RFA datasets, especially the cases where the needles were still in place – and thus visible within the images – caused massive problems (Fig. 8): GrowCut leaks along the needles, because it cannot handle the large gray value differences between the ablation zone (dark) and the RFA needle (bright). Overall, we tested four cases from our datasets with GrowCut: two cases with the needles still in place and two cases without needles. For the two cases with the needle in place, we could only achieve a DSC of 50.64% (for the case presented in Fig. 8) and a DSC of 50.28% for the second case. As mentioned before, in both cases GrowCut leaks along the RFA needle, which explains the low Dice scores. For the cases without a needle in place, however, we could achieve Dice scores of 80.3% and 76.29%. Here, in contrast to the cases with a needle, the leaking did not occur, which resulted in acceptable segmentation results. Still, our approach achieved higher Dice scores of 82.23% and 83.43%, respectively. Moreover, the initialization of GrowCut (marking parts of the lesion and the background) takes between 30 and 60 seconds for a trained user, in contrast to our method, which needs only a single seed point.
In addition, the user has to wait several seconds for the GrowCut segmentation result (on the same PC we measured about 10 seconds for the interactive method), whereas our method provides the segmentation result immediately in real time, which makes a refinement much more convenient. Note: the sharp edges of the GrowCut result (green) in the rightmost image of Fig. 8 occur because the Slicer implementation restricts the segmentation area with a bounding box. The size of the bounding box depends on the initialization by the user and prevents GrowCut from using the whole image or volume for the automatic segmentation process. In addition to GrowCut, we also tested and evaluated existing segmentation approaches implemented in other medical platforms, like the Medical Imaging Interaction Toolkit (MITK, www.mitk.org) and MeVisLab (see the Results section), with our data. MITK, developed by the German Cancer Research Center (DKFZ) in Heidelberg, Germany, also combines the Insight Toolkit (ITK, www.itk.org) and the Visualization Toolkit (VTK, www.vtk.org) with an application framework. Amongst others, we applied the Fast Marching 3D implementation from MITK (version MITK-2015.05) to our data, but it could also not handle the extreme gray value differences between the needle (bright) and the ablation zone (dark). However, in contrast to GrowCut, the Fast Marching algorithm did not leak along the needle, but rather excluded it. Placing additional seed points directly on the needle, thus providing the algorithm with information about the bright parts, also did not lead to a satisfying segmentation result. The DSCs for the two cases with the needles still in place were 51.94% and 30.01%. For the cases without a needle we could achieve better results, with Dice scores of 73.56% and 63.93% compared with the manual segmentations.
However, besides the manual seed points that had to be placed for the approach to run, there are several segmentation parameters (Sigma, Alpha, Beta, Stopping value and Threshold), which, on the one hand, make it quite difficult to find a good parameter setting; on the other hand, the segmentation results could probably be improved with more precise parameter tuning. We also tried the Region Growing 3D from MITK, but for all cases (with and without needle) the approach leaked into the surrounding structures. However, if the seed point was placed directly on the needle within the image, the whole needle could be segmented quite well (due to the bright values of the needle). As a level-set segmentation method we used the itkGeodesicActiveContourLevelSetImageFilter module from the current MeVisLab version (MeVisLab 2.7, 18–06–2015), which wraps the ITK class GeodesicActiveContourLevelSetImageFilter43. Besides an arbitrary number of user-defined seed points, several parameters can be tuned for the segmentation, but a parameter change and re-segmentation took over two minutes, which made it quite time-consuming to find a good setting (note: the same laptop described in the Results section and used for the presented interactive segmentation was also used for the MITK and MeVisLab segmentation approaches). The first case (with needle) showed a DSC of 60.98%, and the segmentation leaked along the needle. The second case with needle had only a DSC of 22.56% and leaked along the needle and into the surrounding structures; this happened because the gray values of the ablation zone were more similar to the surrounding tissue in this case. For the cases without a needle we achieved Dice scores of 45.63% and 70.18%; the lower DSC resulted because we could only achieve either an under- or an over-segmentation of the ablation zone (the DSC presented here is for the under-segmentation).

Figure 6: Postinterventional CT scan of an RF ablation with the ablation needle still in place loaded into 3D Slicer.

On the left side is the Editor module which also contains the GrowCut algorithm.

Figure 7: GrowCut initialization for the segmentation of the RF ablation zone: the ablated zone is marked in green and the background is marked in yellow on three 2D slices, respectively.
Figure 8: GrowCut segmentation result (green) for the initialization from Fig. 7. The GrowCut segmentation leaks along the RFA needle, because it cannot handle the large gray value differences between the ablation zone (dark) and the needle (bright). Note: the sharp edges of the segmentation result in the rightmost image occur because the GrowCut implementation in 3D Slicer automatically restricts the segmentation area with a bounding box that depends on the user initialization.

There are several areas for future work: we plan to integrate the interactive segmentation algorithm into a medical application framework for supporting ablation therapy, which is currently being developed within a project funded by the European FP7 program. In particular, our semi-automatic algorithm is targeted at the segmentation of difficult cases where an automatic segmentation fails. Furthermore, we plan to provide the datasets acquired over the course of the project and want to investigate their use for lesion tracking after one or several RFA interventions. Moreover, we hope to improve the segmentation outcomes with hardware-accelerated medical image processing. One parameter of our approach is the number of nodes used for the graph construction. This results from the number of rays (sent out from the user-defined seed point) and the number of nodes sampled along every ray. As described in the Material and Methods section, we use the surface points of a polyhedron to enable a faster calculation of the rays, which means we rely on recursively refined polyhedra with, e.g., 12, 32, 92, 272, 812 or 2432 vertices. At present, we use at most 812 surface points and at most 40 points per ray to still allow interactive real-time feedback for the user. A greater number of nodes would cause too much delay between the single segmentation calculations and break the real-time character of the presented approach. Nevertheless, we would like to apply a greater number of rays – e.g. 2432 – in the future and expect that this will lead to even more precise segmentation results. Currently, the mincut calculation is the bottleneck that does not permit a greater number of nodes on current standard hardware configurations, but a solution may be execution on a GPU.
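The refinement sequence quoted above can be reproduced with a simple recurrence. This is a hedged sketch, not taken from the paper: the rule v(k+1) = 3·v(k) − 4, starting from the 12 vertices of an icosahedron, is one reading that matches the quoted numbers; the function name is illustrative.

```python
def refined_vertex_counts(levels, v0=12):
    """Vertex counts of successively refined polyhedra, assuming each
    refinement step follows v_next = 3 * v - 4 (matches 12, 32, 92, ...)."""
    counts = [v0]
    for _ in range(levels):
        counts.append(3 * counts[-1] - 4)
    return counts

print(refined_vertex_counts(5))  # [12, 32, 92, 272, 812, 2432]
```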

Material and Methods

Data Acquisition

For this retrospective study we used twelve intraprocedural datasets from ten patients who underwent a radiofrequency ablation of a liver tumor. All datasets had a matrix size of 512 × 512 in x- and y-direction, and the number of slices in z-direction ranged from 52 to 232. The slice thickness was either 1 or 2 mm, the pixel spacing ranged from 0.679 to 0.777 mm, and the spacing between the slices was 1 to 3 mm. In six datasets (cases 1–5 and case 12), the ablation needle was still remaining in the liver. All datasets were acquired on a multislice CT scanner (Philips Brilliance or Mx8000, Philips Healthcare, Netherlands). The data collection, analysis and a future publication were approved by the Institutional Review Board (IRB) of the Medical University of Leipzig, Germany (reference number: 381–14–15122014). The methods were carried out in accordance with the approved guidelines. The RFA data will soon be freely available for research purposes from the official webpage of the European Project ClinicIMPPACT (please cite this publication if you use any of these data in your own work): www.clinicimppact.eu.

Note: since this is a European Project scheduled to run for at least three years, until 2017, we will add data from several clinical partners around Europe. Furthermore, even more ablation data from a comprehensive pig study can be found on the webpage of the European Project GoSmart44: www.gosmart-project.eu.

Manual Segmentation

To generate the ground truth of the ablation zones, we set up a segmentation framework under MeVisLab which provides classic contouring capabilities. This allowed the physicians to manually outline RFA lesions in the datasets slice-by-slice without any algorithmic support, to avoid distortions. Afterwards, the single contours were voxelized (Fig. 9) and merged into a 3D mask representing the ablation zone. These 3D masks were used for comparison and quantitative evaluation with the semi-automatic segmentation results.
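The contour-to-mask step can be illustrated with a standard even-odd (ray casting) point-in-polygon test. This is a generic sketch, not the MeVisLab implementation used in the study; all names and the toy contour are illustrative.

```python
def voxelize_contour(contour, width, height):
    """Rasterize a closed 2D contour (list of (x, y) vertices) into a
    binary mask using the even-odd rule on pixel centers."""
    mask = [[0] * width for _ in range(height)]
    n = len(contour)
    for y in range(height):
        for x in range(width):
            cx, cy = x + 0.5, y + 0.5  # pixel center
            inside = False
            for i in range(n):
                x1, y1 = contour[i]
                x2, y2 = contour[(i + 1) % n]
                # Does the edge cross the horizontal line through (cx, cy)?
                if (y1 > cy) != (y2 > cy):
                    x_int = x1 + (cy - y1) * (x2 - x1) / (y2 - y1)
                    if cx < x_int:
                        inside = not inside
            mask[y][x] = int(inside)
    return mask

# Toy rectangular "ablation contour" on an 8x7 slice.
mask = voxelize_contour([(1, 1), (6, 1), (6, 5), (1, 5)], 8, 7)
```

Per-slice masks produced this way would then be stacked along z to form the 3D mask described above.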

Figure 9: The left side shows a manually outlined ablation zone (red) on a single 2D slice and the right side presents the corresponding voxelized mask (white).

All voxelized 2D slices are merged into one 3D mask representing the whole ablation zone. This manual segmentation is used to calculate the Dice Similarity Coefficient (DSC) with the segmentation result of the algorithm.

Evaluation Metric

As an evaluation metric we used the Dice Similarity Coefficient (DSC)37. The DSC is a common metric widely used in medical image analysis, where the agreement between two binary volumes M and S is calculated as:

DSC = 2 · V(M ∩ S) / (V(M) + V(S))

Here, M and S are the binary masks from the manual (M) and the semi-automatic (S) segmentations, V(·) denotes the volume (in mm³) and ∩ denotes the intersection. We computed the volume by counting the number of voxels and multiplying it by the physical voxel size. In addition to the DSC, we measured the time it took an experienced radiologist to manually outline the ablation zones and compared it with the computation time of our semi-automatic segmentation procedure.
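The DSC computation described above can be sketched in a few lines. This is an illustrative implementation, not the study's evaluation code; masks are represented here as sets of foreground voxel coordinates, and the constant voxel volume cancels out of the ratio but is kept for clarity.

```python
def dice_coefficient(m, s, voxel_volume_mm3=1.0):
    """Dice Similarity Coefficient between two binary masks M and S,
    given as sets of foreground voxel coordinates.

    V(.) is the voxel count times the physical voxel size (mm^3)."""
    v_m = len(m) * voxel_volume_mm3
    v_s = len(s) * voxel_volume_mm3
    v_intersection = len(m & s) * voxel_volume_mm3
    return 2.0 * v_intersection / (v_m + v_s)

# Toy masks overlapping in 2 of 3 foreground voxels each:
manual = {(0, 0, 0), (0, 0, 1), (0, 1, 0)}
semi_automatic = {(0, 0, 0), (0, 0, 1), (1, 1, 1)}
print(dice_coefficient(manual, semi_automatic))  # 2*2/(3+3) = 0.666...
```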

Semi-automatic Segmentations

The semi-automatic segmentation algorithm uses a spherical template to set up a three-dimensional graph G(V,E) around the ablation zone45. Overall, the graph consists of nodes and edges connecting these nodes. The nodes are sampled along rays whose origin resides at a user-defined seed point and whose directions point towards the surface of a polyhedron enclosing the seed point46. In addition, we use two virtual nodes to construct the graph: a source s and a sink t. After graph construction, the segmentation result is calculated by dividing the graph into two sets of nodes via a Min-Cut/Max-Flow algorithm47. As a result, one set of nodes represents the ablation zone (foreground) and is bound to one of the virtual nodes, e.g. the sink t. The other set represents the surrounding structures (background) and is bound to the other virtual node (in this case the source s). The energy function of the graph cut follows the Gibbs model48 and the cost function relates to the publication of Li et al.49, where a fixed average gray value of the region is needed to calculate the single weights. Moreover, that approach was designed to segment elongated structures like the aorta50,51 and needed a centerline as input, which makes it not applicable for an interactive real-time approach. In contrast, our approach needs only one seed point, which can also be used to derive the average gray value on the fly during the segmentation, making it independent of a fixed preset gray value. Furthermore, this means that it can also handle cases with different average ablation-zone values. Amongst others, the basic segmentation scheme has already been successfully applied to pituitary adenomas52 and prostate central glands (PCG)53. The underlying workflow is shown in Fig. 10 for a post-interventional CT scan of a patient whose liver tumor has been treated with an RFA: a spherical template (blue, leftmost image) has been applied to set up the three-dimensional graph G(V,E) – consisting of nodes and edges – in the second image from the left. This graph G is automatically constructed at a seed point position within the image (here indicated by several 2D slices) – note: in general, the graph is not visible to the user during the segmentation process; rather, it is automatically constructed in the background. In this example the RFA needle is still visible within the image (bright parts inside the liver), and especially in the second and third images the characteristic umbrella shape of the fully expanded RFA needle is noticeable. Afterwards, as already mentioned above, the graph is automatically divided by the Min-Cut/Max-Flow algorithm into two disjoint sets of nodes: one set representing the ablation zone and the other the surrounding background. The transition between these two sets – or, in other words, the last nodes of every ray that are still bound to the user-defined seed point – is the actual segmentation result, which is displayed to the user in the rightmost image of Fig. 10 (red). Additionally, note that these red dots in the rightmost image (representing the segmentation result) were actually sampled nodes during the graph construction in the beginning.
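The first construction step can be sketched as follows: nodes are sampled at equidistant positions along rays running from the seed point towards the polyhedron's surface points. This is a minimal sketch under that assumption, not the authors' implementation; all names are illustrative.

```python
def sample_nodes(seed, surface_points, nodes_per_ray):
    """Sample equidistant graph nodes along each ray seed -> surface point.

    Returns one list of 3D node positions per ray."""
    rays = []
    for p in surface_points:
        direction = tuple(pc - sc for pc, sc in zip(p, seed))
        ray = [tuple(sc + (k / nodes_per_ray) * dc
                     for sc, dc in zip(seed, direction))
               for k in range(1, nodes_per_ray + 1)]
        rays.append(ray)
    return rays

# Toy example: seed at the origin, two surface points, four nodes per ray.
rays = sample_nodes((0.0, 0.0, 0.0),
                    [(4.0, 0.0, 0.0), (0.0, 4.0, 0.0)], 4)
print(rays[0])  # [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (3.0, 0.0, 0.0), (4.0, 0.0, 0.0)]
```

In the paper's setting the surface points would come from the refined polyhedron (up to 812 points) with up to 40 nodes per ray.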

Figure 10: Overall workflow of the RF ablation zone segmentation: a sphere (left) is used to construct a graph (second image from the left).
Figure 10

The graph is constructed (not visible to the user) at the user-defined seed point position within the image (third image from the left). Finally, the segmentation result (red) corresponding to the seed point is shown to the user (rightmost image).

In more detail, the whole set of edges E (of a constructed graph G) consists of edges that connect the sampled nodes to the source and the sink, and edges that establish connections between sampled nodes. In a first step, rays (leftmost image of Fig. 11, blue lines) are sent out from the user-defined seed point through the surface points of a polyhedron (leftmost image of Fig. 11, red dots). Afterwards, the graph's nodes are sampled along these rays between the user-defined seed point and the surface points of the polyhedron (middle image of Fig. 11, red dots). Then, the so-called intra-edges between the nodes (red) along one ray are constructed (rightmost image of Fig. 11, blue arrows). These edges ensure that all nodes below a segmented surface in the graph are included to form a closed set; in other words, the interior of the object (the ablation zone) is separated from the exterior (the surrounding background) in the data. Next, the inter-edges of the graph are constructed. These edges establish (a) connections between nodes that have been sampled along different rays and (b) connections from the sampled nodes to the source and the sink. Thereby, the inter-edges (a) constrain the set of possible segmentations and enforce the smoothness of the segmentation result via an integer parameter Δr. The larger this parameter is, the larger the number of possible segmentations; a value of zero forces the segmentation result to be a sphere. Figure 12 exemplarily shows the inter-edges (between the nodes sampled along three rays) for three different Δr values: zero (leftmost image), one (middle image) and two (rightmost image). Additionally, Fig. 13 demonstrates how the different Δr values influence the segmentation outcome for two adjacent rays and their sampled nodes. For the leftmost image of Fig. 13, a Δr value of zero was chosen.
Thus, only inter-edges between nodes on the same "node level" are established, which also forces the mincut to be on the same "node level" across different rays (green); otherwise, costs would arise from cutting the inter-edges (which the mincut automatically avoids). Furthermore, the position (or the "node level") of the cut (in this example between the second and third nodes, counting from the bottom) depends on other factors, such as the gray values in the image. In any case, however, the segmentation outcome will be a sphere, and the "node level" of the cut determines the size of this sphere. The next three images of Fig. 13 illustrate what happens for a Δr value of one. As can be seen, the inter-edges have been constructed between different "node levels", equivalent to the middle image of Fig. 12. For a Δr value of one, the mincut definitely has to cut two inter-edges (red scissors), regardless of whether the cut is on the same "node level" (second image from the right of Fig. 13) or between different "node levels" (third image from the right of Fig. 13). However, for a Δr value of one, this only applies if the cut also stays within a maximum "node level" distance of one. For a cut with a "node level" distance of two (or larger) and a Δr value of one, costs for cutting four (or more) inter-edges would arise, as shown in the rightmost image of Fig. 13 (note: the mincut algorithm will automatically avoid this cut). Accordingly, this principle applies to larger Δr values, like 2, 3, etc. Once the connections between the nodes (sampled nodes and virtual nodes) have been established, a weight is assigned to every edge. These weights are the costs that arise when the mincut algorithm cuts through the edges. The intra-edges along one ray and the inter-edges resulting from the Δr value are assigned a maximum value ∞, e.g. the maximum float value in an implementation.
The weights of the edges connecting the sampled nodes with the two virtual nodes, source and sink, depend on the gray values of the image at the positions of the sampled nodes. Technically speaking, the weights depend on cost values describing the absolute difference between an average ablation-zone gray value and the gray value at the node's position. Figure 14 provides an example for nodes that have been sampled along one ray and then bound to the source (red) and the sink (blue) via the absolute difference from an average (ablation zone) value. Additionally, the intra-edges (1.-8.) are drafted in the figure. As can be seen, the lowest cost of ninety arises when the mincut algorithm cuts through the third intra-edge (green). The following Fig. 15 emphasizes the value of the intra-edges: if they were removed from Fig. 14, the mincut could avoid cutting any edges (green), resulting in a total cost of zero. The average ablation-zone (gray) value has a huge influence on the segmentation result, but we can assume that the user places the seed point inside the ablation zone. Thus, we can use this information to determine the average ablation-zone value during the interactive segmentation. In addition, to avoid outliers, e.g. produced by the RFA needle (if the needle is still in place and therefore visible within the image), we integrate over a small region of about one cm³ around the current position of the user-defined seed point. In doing so, we handle situations where the user places the seed point on the RFA needle within the image and the corresponding gray value is much too bright to be used as an average ablation-zone value. This strategy also automatically handles different image acquisitions and makes the algorithm largely disease-invariant and scan-independent (for example, robust against possible inhomogeneities within ablation zones). But the greatest advantage is still the automatic handling of scans with and without RFA needles visible within the medical image.
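The weighting just described can be sketched as follows. This is a hedged illustration of the principle, not the paper's code: the terminal-edge cost is the absolute difference between a node's gray value and an average ablation-zone value, which is itself estimated by averaging a small cube around the seed so that single bright needle voxels are suppressed. All names, the cube radius, and the toy volume are illustrative.

```python
def average_around_seed(volume, seed, radius=3):
    """Mean gray value in a (2r+1)^3 voxel cube around the seed (z, y, x).

    Averaging over a small region suppresses outliers such as bright
    needle voxels near the seed point."""
    z0, y0, x0 = seed
    values = []
    for z in range(z0 - radius, z0 + radius + 1):
        for y in range(y0 - radius, y0 + radius + 1):
            for x in range(x0 - radius, x0 + radius + 1):
                if (0 <= z < len(volume) and 0 <= y < len(volume[0])
                        and 0 <= x < len(volume[0][0])):
                    values.append(volume[z][y][x])
    return sum(values) / len(values)

def terminal_weight(gray_value, avg_ablation_value):
    """Cost of cutting the edge binding a node to source/sink: the absolute
    difference between the node's gray value and the average value."""
    return abs(avg_ablation_value - gray_value)

# 1x1x3 toy volume: two dark ablation voxels and one bright needle voxel.
volume = [[[50, 60, 200]]]
avg = average_around_seed(volume, (0, 0, 1), radius=1)  # (50+60+200)/3
print(terminal_weight(55, avg))
```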

Figure 11: More detailed graph construction for intra-edges of the segmentation approach.
Figure 11

Left image: in a first step, rays (blue) are sent through the surface points (red) of a polyhedron. Middle image: the graph's nodes are sampled along the rays from the previous image. Right image: the intra-edges (blue arrows) are constructed between the sampled nodes (red).

Figure 12: Principle of the inter-edge constructions (red arrows) between nodes (red dots) that have been sampled along three different rays.
Figure 12

The leftmost image shows the inter-edges for a Δr value of zero; thus the inter-edges are all on the same "node level". The middle image shows the inter-edges for a Δr value of one, resulting in edges that connect nodes from different "node levels", however with a maximum level difference of one. Finally, the rightmost image shows the inter-edges for a Δr value of two, connecting nodes with a "node level" distance of up to two. Similarly, this practice also applies to larger values of Δr, e.g. three or four and so on.

Figure 13: This figure illustrates the course of action of the mincut for different Δr values.
Figure 13

On the left side, the inter-edges for a Δr value of zero have been constructed. Thus the mincut will separate all nodes on the same "node level" to avoid costs for cutting inter-edges. Note that the location of the cut (green) depends here on other factors, like the underlying gray values. Similar to the second image of Fig. 12, the inter-edges for a Δr value of one have been constructed for the three rightmost images. As can be seen, the mincut has two options for cutting inter-edges (red scissors) that produce the same costs: (1.) cutting on the same "node level" (second image from the right) and (2.) cutting on different "node levels" with a node distance of one (third image from the right). However, cutting on different "node levels" with a node distance of two (or greater) will produce higher costs and is therefore automatically avoided by the mincut algorithm, as seen in the rightmost figure. The same principle also applies for larger values of Δr.

Figure 14: This image shows several sampled nodes for a graph that has been bound by edges and weights to the source (red) and the sink (blue).
Figure 14

Additionally, the eight intra-edges between the sampled nodes have been generated. As can be seen, the minimal s-t-cut cuts the third intra-edge and produces a total cost of ninety.

Figure 15: This mincut example, adapted from Fig. 14, discloses the benefit of the intra-edges (which have been omitted here): the s-t-cut (green) avoids cutting any inter-edges, producing an overall "cutting" cost of zero.
Figure 15

Hence, this would no longer ensure that all nodes below a segmented surface in the graph are included to form a closed set.

Subsequently, this basic segmentation scheme was turned into an interactive real-time approach54, and for this study it was enhanced by an additional refinement option55,56,57. To start the interactive segmentation process, the user places a seed point roughly in the middle of the ablation zone on a 2D slice. From this seed point we construct the graph and automatically calculate and display the segmentation result for the user. The user now has the option to drag the seed point around to interactively generate new segmentation results depending on the current seed point position. Additionally, the user can interrupt the dragging of the seed point by releasing the mouse button and add an arbitrary number of seed points on the border of the ablation zone. Thus, the algorithm receives supporting input about the location of the border, which steers the behavior of the min-cut. The user can always come back to the initial seed point and start dragging it around in the image again (note: additionally placed seed points will not get lost; rather, they stay fixed and continue to influence/restrict the min-cut). Admittedly, the algorithm is sensitive to the seed point selection. However, due to the interactive nature of the algorithm, the user can quickly change the seed point location when not satisfied with the immediate result and thus optimize the segmentation.

Additional Information

How to cite this article: Egger, J. et al. Interactive Volumetry of Liver Ablation Zones. Sci. Rep. 5, 15373; doi: 10.1038/srep15373 (2015).

References

1. et al. Multimodal treatment of hepatocellular carcinoma. Eur. J. Intern. Med. 25, 430–437 (2014).
2. et al. Percutaneous radiofrequency ablation of hepatocellular carcinoma as a bridge to liver transplantation. Hepatology 41, 1130–1137 (2005).
3. et al. Radiofrequency thermal ablation of liver tumors. Eur. Radiol. 15, 884–894 (2005).
4. Radiofrequency ablation of hepatocellular carcinoma: pros and cons. Gut Liver 4, 113–8 (2010).
5. et al. American Society of Clinical Oncology 2009 clinical evidence review on radiofrequency ablation of hepatic metastases from colorectal cancer. J. Clin. Oncol. 28, 493–508 (2010).
6. Therapeutic efficacy comparison of radiofrequency ablation in hepatocellular carcinoma and metastatic liver cancer. Exp. Ther. Med. 7, 897–900 (2014).
7. In A Survey of Techniques for Human Detection from Video. M.S. Scholarly paper, University of Maryland, 1–15 (2005).
8. et al. Automatic volumetry on MR brain images can support diagnostic decision making. BMC Medical Imaging 8, 9 (2008).
9. Snakes: active contour models. Int. J. Comput. Vis. 1, 321–331 (1987).
10. Detection and visualization of endoleaks in CT data for monitoring of thoracic and abdominal aortic aneurysm stents. Paper presented at SPIE 6918, Medical Imaging 2008: Visualization, Image-Guided Procedures, and Modeling, 69181F (17 March 2008), 10.1117/12.769414.
11. Deformable models in medical image analysis: a survey. Med. Image Anal. 1, 91–108 (1996).
12. et al. Segmentation of vertebral bodies in MR images. Paper presented at Vision, Modeling, and Visualization (VMV12), 135–142 (2012), 10.2312/PE/VMV/VMV12/135-142.
13. Active appearance models. Paper presented at the European Conference on Computer Vision – ECCV’98, Freiburg, Germany, Springer, 2, 484–498 (1998).
14. et al. Segmentation of aortic aneurysms in CTA images with the statistic approach of the active appearance models. Paper presented at Bildverarbeitung für die Medizin (BVM), Berlin, Springer, 51–55 (2008).
15. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 22, 888–905 (2000).
16. et al. A fast and robust graph-based approach for boundary estimation of fiber bundles relying on fractional anisotropy maps. Paper presented at the 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, IEEE, 4016–4019 (2010).
17. A modified possibilistic fuzzy c-means clustering algorithm for bias field estimation and segmentation of brain MR image. Comput. Med. Imaging Graph. 35, 383–97 (2011).
18. Image segmentation by EM-based adaptive pulse coupled neural networks in brain magnetic resonance imaging. Comput. Med. Imaging Graph. 34, 308–20 (2010).
19. Fully automatic liver tumor segmentation from abdominal CT scans. Paper presented at the International Conference on Computer Engineering and Systems (ICCES), Cairo, Egypt, IEEE, 197–202 (2010).
20. et al. Liver tumor segmentation using kernel-based FGCM and PGCM. In Abdominal Imaging 7029, 99–107 (2012).
21. 3D fuzzy liver tumor segmentation. In Information Technologies in Biomedicine 7339, 47–57 (2012).
22. Interactive liver tumor segmentation from CT scans using support vector classification with watershed. Paper presented at the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Boston, MA, USA, IEEE, 6005–6008 (2011).
23. Liver tumors segmentation from CTA images using voxels classification and affinity constraint propagation. Int. J. Comput. Assist. Radiol. Surg. 6, 247–55 (2011).
24. et al. Semi-automatic segmentation of liver tumors from CT scans using bayesian rule-based 3D region growing. Paper presented at the Grand Challenge Liver Tumor Segmentation, Medical Image Computing and Computer Assisted Intervention (MICCAI) Workshop, New York City, NY, USA, Springer, 1–10 (2008, September 6–10).
25. Semi-automatic liver tumor segmentation with hidden Markov measure field model and non-parametric distribution estimation. Med. Image Anal. 16, 140–149 (2012).
26. et al. Radiofrequency ablation of liver tumors: quantitative assessment of tumor coverage through CT image processing. BMC Med. Imaging 13 (2013), 10.1186/1471-2342-13-3.
27. Interactive live-wire boundary extraction. Med. Image Anal. 1, 331–341 (1997).
28. Liver segmentation from computed tomography scans: a survey and a new algorithm. Artif. Intell. Med. 45, 185–196 (2009).
29. et al. Radio frequency ablation registration, segmentation, and fusion tool. IEEE Trans. Inf. Technol. Biomed. 10, 490–6 (2006).
30. et al. Semiautomated versus manual evaluation of liver metastases treated by radiofrequency ablation. J. Vasc. Interv. Radiol. 21, 245–51 (2010).
31. et al. OncoTREAT: a software assistant for cancer therapy monitoring. Int. J. Comput. Assist. Radiol. Surg. 1, 231–242 (2007).
32. et al. Workflow oriented software support for image guided radiofrequency ablation of focal liver malignancies. Paper presented at SPIE 6509, Medical Imaging 2007: Visualization and Image-Guided Procedures, 650919 (21 March 2007), 10.1117/12.709503.
33. et al. Fast automated segmentation and reproducible volumetry of pulmonary metastases in CT-scans for therapy monitoring. Paper presented at Medical Image Computing and Computer-Assisted Intervention (MICCAI), Saint-Malo, France, Springer, 3217, 933–941 (2004).
34. et al. Liver metastases: 3D shape-based analysis of CT scans for detection of local recurrence after radiofrequency ablation. Radiology 241, 243–50 (2006).
35. et al. Variability of size and shape of necrosis induced by radiofrequency ablation in human livers: a volumetric evaluation. Ann. Surg. Oncol. 11, 420–5 (2004).
36. et al. State of the art in computer-assisted planning, intervention, and assessment of liver-tumor ablation. Crit. Rev. Biomed. Eng. 38, 31–52 (2010).
37. et al. Measuring intra- and inter-observer agreement in identifying and localizing structures in medical images. Paper presented at the IEEE International Conference on Image Processing, San Antonio, TX, USA, IEEE, 81–84 (2006).
38. et al. Novel semiautomatic real-time CT segmentation tool and preliminary clinical evaluation on thermally induced lesions in the liver. Paper presented at the 100th Annual Meeting of the Radiological Society of North America (RSNA), Chicago, IL, USA, PHS, 171 (2014).
39. et al. Semi-automatic segmentation of ablation zones in post-interventional CT data. Paper presented at Bildverarbeitung für die Medizin (BVM), Lübeck, Germany, Springer (2015).
40. GrowCut: interactive multi-label N-D image segmentation. Paper presented at Graphicon, Novosibirsk Akademgorodok, Russia, Proc. Graphicon, 150–156 (2005, June 20–24).
41. et al. GBM volumetry using the 3D Slicer medical image computing platform. Sci. Rep. 3, 1364 (2013).
42. Pituitary adenoma volumetry with 3D Slicer. PLoS ONE 7, e51788 (2012).
43. Geodesic active contours. Int. J. Comput. Vis. 22, 61–97 (1997).
44. et al. High-resolution contrast enhanced multi-phase hepatic computed tomography data from a porcine radio-frequency ablation study. Paper presented at the IEEE 11th International Symposium on Biomedical Imaging (ISBI), Beijing, China, IEEE, 81–84 (2014).
45. et al. Nugget-Cut: a segmentation scheme for spherically- and elliptically-shaped 3D objects. Paper presented at the 32nd Annual Symposium of the German Association for Pattern Recognition (DAGM), Darmstadt, Germany, 6376, 383–392 (2010).
46. A fast vessel centerline extraction algorithm for catheter simulation. Paper presented at the 20th IEEE International Symposium on Computer-Based Medical Systems (CBMS’07), Maribor, Slovenia, 177–182 (20–22 June 2007), 10.1109/CBMS.2007.5.
47. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans. Pattern Anal. Mach. Intell. 26, 1124–1137 (2004).
48. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. 6, 721–741 (1984).
49. Optimal surface segmentation in volumetric images – a graph-theoretic approach. IEEE Trans. Pattern Anal. Mach. Intell. 28, 119–134 (2006).
50. et al. Aorta segmentation for stent simulation. Paper presented at the 12th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Cardiovascular Interventional Imaging and Biophysical Modelling Workshop, London, UK, Springer, 1–10 (18 September 2009).
51. et al. Aortic volume as an indicator of disease progression in patients with untreated infrarenal abdominal aneurysm. Eur. J. Radiol. 81, e87–93 (2011).
52. Pituitary adenoma segmentation. Paper presented at the International Biosignal Processing Conference, Berlin, Germany, Paper-ID 061, Proc. Biosignal, 1–4 (14–16 July 2010).
53. PCG-Cut: graph driven segmentation of the prostate central gland. PLoS ONE 8, e76645 (2013).
54. Interactive-Cut: real-time feedback segmentation for translational research. Comput. Med. Imaging Graph. 38, 285–95 (2014).
55. Manual refinement system for graph-based segmentation results in the medical domain. J. Med. Syst. 36, 2829–39 (2012).
56. et al. A flexible semi-automatic approach for glioblastoma multiforme segmentation. Paper presented at the International Biosignal Processing Conference, Berlin, Germany, Paper-ID 060, Proc. Biosignal, 1–4 (14–16 July 2010).
57. A medical software system for volumetric analysis of cerebral pathologies in magnetic resonance imaging (MRI) data. J. Med. Syst. 36, 1–13 (2011).


Acknowledgements

This work received funding from the European Union in FP7: Clinical Intervention Modelling, Planning and Proof for Ablation Cancer Treatment (ClinicIMPPACT, grant agreement no. 610886) and Generic Open-end Simulation Environment for Minimally Invasive Cancer Treatment (GoSmart, grant agreement no. 600641). Dr. Bernhard Kainz is supported by an EU FP7 MC-IEF 325661 grant and Dr. Xiaojun Chen receives support from NSFC (National Natural Science Foundation of China) grant 81171429. Dr. Jan Egger receives funding from BioTechMed-Graz (“Hardware accelerated intelligent medical imaging”). The authors would like to thank the clinical staff enabling this study and MeVis in Bremen, Germany, for providing an academic license for the MeVisLab software. Videos demonstrating the interactive segmentation can be found in the following YouTube channel: https://www.youtube.com/c/JanEgger/videos.

Author information

Author notes

    • Dieter Schmalstieg
    •  & Michael Moche

    These authors jointly supervised this work.

Affiliations

  1. Department of Neuroscience and Biomedical Engineering, Aalto University, Rakentajanaukio 2 C, 02150 Espoo, Finland

    • Jan Egger
    • , Tuomas Alhonnoro
    •  & Mika Pollari
  2. Institute for Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Austria

    • Jan Egger
    • , Philip Voglreiter
    • , Mark Dokter
    • , Michael Hofmann
    •  & Dieter Schmalstieg
  3. BioTechMed-Graz, Austria

    • Jan Egger
  4. Department of Diagnostic and Interventional Radiology, Leipzig University Hospital, Liebigstraße 20, 04103 Leipzig, Germany

    • Harald Busse
    • , Philipp Brandmaier
    • , Daniel Seider
    • , Matthias Gawlitza
    • , Steffen Strocka
    •  & Michael Moche
  5. Department of Computing, Imperial College London, Huxley Building, 180 Queen’s Gate, London SW7 2AZ, UK

    • Bernhard Kainz
  6. Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstraße 60, 70174 Stuttgart, Germany

    • Alexander Hann
  7. Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Dong Chuan Road 800, Shanghai Post Code: 200240, China

    • Xiaojun Chen

Authors

Jan Egger, Harald Busse, Philipp Brandmaier, Daniel Seider, Matthias Gawlitza, Steffen Strocka, Philip Voglreiter, Mark Dokter, Michael Hofmann, Bernhard Kainz, Alexander Hann, Xiaojun Chen, Tuomas Alhonnoro, Mika Pollari, Dieter Schmalstieg & Michael Moche

Contributions

Conceived and designed the experiments: J.E., H.B. and M.M. Performed the experiments: J.E., H.B. and M.M. Analyzed the data: J.E., H.B. and M.M. Contributed reagents/materials/analysis tools: J.E., H.B., P.B., D.S., M.G., S.S., P.V., M.D., M.H., B.K., A.H., X.C., T.A., M.P., M.M. and D.S. Wrote the paper: J.E., H.B., P.B. and M.P.

Competing interests

The authors declare no competing financial interests.

Corresponding author

Correspondence to Jan Egger.
