NuInsSeg: A fully annotated dataset for nuclei instance segmentation in H&E-stained histological images

In computational pathology, automatic nuclei instance segmentation plays an essential role in whole slide image analysis. While many computerized approaches have been proposed for this task, supervised deep learning (DL) methods have shown superior segmentation performance compared to classical machine learning and image processing techniques. However, these models need fully annotated datasets for training, which are challenging to acquire, especially in the medical domain. In this work, we release one of the biggest fully manually annotated datasets of nuclei in Hematoxylin and Eosin (H&E)-stained histological images, called NuInsSeg. This dataset contains 665 image patches with more than 30,000 manually segmented nuclei from 31 human and mouse organs. Moreover, for the first time, we provide additional ambiguous area masks for the entire dataset. These vague areas represent the parts of the images where precise and deterministic manual annotation is impossible, even for human experts. The dataset and detailed step-by-step instructions to generate the related segmentation masks are publicly available in the respective repositories.


Background & Summary
With the advent of brightfield and fluorescent digital scanners that produce and store whole slide images (WSIs) in digital form, there is a growing trend to exploit computerized methods for semi- or fully-automatic WSI analysis 1 . In digital pathology and biomedical image analysis, nuclei segmentation plays a fundamental role in image interpretation 2 . Specific nuclei characteristics such as nuclei density or nucleus-to-cytoplasm ratio can be used for cell and tissue identification or for diagnostic purposes such as cancer grading 2-4 . Nuclei instance segmentation masks enable the extraction of valuable statistics for each nucleus 5 . While experts can manually segment nuclei, this is a tedious and complex procedure, as thousands of instances can appear in a small patch of a WSI 4,6 . It is also worth mentioning that due to various artifacts such as folded tissues, out-of-focus scanning, considerable variations of nuclei staining intensities within a single image, and the complex nature of some histological samples (e.g., a high density of nuclei), accurate and deterministic manual annotation is not always possible, even for human experts. The inter- and intra-observer variability reported in previous studies, showing a low level of agreement in the annotation of cell nuclei by medical experts, confirms this general problem 5,7 .
In recent years, many semi- and fully-automatic computer-based methods have been proposed to perform nuclei instance segmentation automatically and more efficiently. A wide range of approaches, from classical image processing to advanced machine learning methods, have been proposed for this task 4,7 . Up to this point, supervised deep learning (DL) methods such as Mask R-CNN and its variants 8,9 , distance-based methods 10,11 , and multi encoder-decoder approaches 6,12,13 have shown the best instance segmentation performance. However, training these models requires fully annotated datasets, which are difficult to acquire in the medical domain 4,5,14 .
A number of fully annotated nuclei instance segmentation datasets are available. These datasets were introduced for various types of staining, such as Hematoxylin and Eosin (H&E), immunohistochemical, and immunofluorescence stainings 4,15-17 . The most common staining type in routine pathology is H&E staining; therefore, most introduced datasets were based on this staining method. These datasets are valuable contributions to the research field and help researchers to develop better segmentation models. Table 1 shows the most prominent fully manually annotated H&E-stained nuclei segmentation datasets that have been actively used by the research community in the past few years. Besides these datasets, some semi-automatically generated datasets such as PanNuke 18 , Lizard 19 , and the Hou et al. dataset 20 have also been introduced in the past. To generate these datasets, various approaches, such as using trained backbone models or point annotations, were exploited 21-23 . However, training models on semi-automatically generated datasets may introduce a hidden bias towards the reference model instead of learning the true human expert style of nuclei instance segmentation.
In this work, we introduce NuInsSeg, one of the most extensive publicly available datasets for nuclei segmentation in H&E-stained histological images. The primary statistics of this dataset are presented in the last row of Table 1. Our dataset can be used on its own to develop, test, and evaluate machine-learning-based algorithms for nuclei instance segmentation, or as an independent test set to estimate the generalization capability of already developed nuclei instance segmentation methods.

Sample preparation
The NuInsSeg dataset contains fully annotated brightfield images for nuclei instance segmentation. The H&E-stained sections of 23 different human tissues were provided by Associate Professor Adolf Ellinger, PhD, from the specimen collection of the Department of Cell Biology and Ultrastructural Research, Center for Anatomy and Cell Biology, Medical University of Vienna. We only obtained the stained tissue sections, not the original tissues. These sections had long been used solely for teaching purposes, for which no ethics votum applied. Some of the human tissues were formaldehyde-fixed, embedded in celloidin, and sectioned at 15-20 µm (jejunum, kidney, liver, oesophagus, palatine tonsil, pancreas, placenta, salivary gland, spleen, tongue). The other human tissues were formaldehyde-fixed, paraffin-embedded (FFPE), and sectioned at 4-5 µm (cerebellum, cerebrum, colon, epiglottis, lung, melanoma, muscle, peritoneum, stomach (cardia), stomach (pylorus), testis, umbilical cord, and urinary bladder). Mouse tissue samples from bone (femur), fat (subscapularis), heart, kidney, liver, muscle (tibialis anterior muscle), spleen, and thymus were obtained from 8-week-old male C57BL/6J mice. 4 µm sections of the FFPE mouse tissue samples were stained with H&E (ROTH, Austria) and coverslipped with Entellan (Merck, Germany).

Field of view and patch selection
The scanning system stores individual 2048 × 2048 pixel fields of view (FOVs) with their respective locations so that they can be combined into a WSI. Instead of using WSIs, we utilized the FOVs to generate the dataset. A senior cell biologist selected the most representative FOVs for each human and mouse WSI. From each FOV, a 512 × 512 pixel image was extracted by central cropping. These images were saved in the lossless Portable Network Graphics (PNG) format. In total, 665 raw image patches were created to build the NuInsSeg dataset.
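The central cropping step described above can be sketched as follows. This is a minimal numpy sketch, not the authors' released code; the helper name `center_crop` is hypothetical.

```python
import numpy as np

def center_crop(fov: np.ndarray, size: int = 512) -> np.ndarray:
    """Extract a centered size x size patch from a field of view (H x W [x C])."""
    h, w = fov.shape[:2]
    if h < size or w < size:
        raise ValueError("FOV is smaller than the requested crop")
    top = (h - size) // 2
    left = (w - size) // 2
    return fov[top:top + size, left:left + size]

# A 2048 x 2048 RGB FOV, as produced by the scanner, yields a 512 x 512 patch.
fov = np.zeros((2048, 2048, 3), dtype=np.uint8)
patch = center_crop(fov)
print(patch.shape)  # (512, 512, 3)
```

In practice the patch would then be written out losslessly, e.g. with an image library's PNG writer, to match the published dataset format.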

Generation of ground truth, auxiliary, and ambiguous area segmentation masks
We used ImageJ 28 (version 1.53, National Institutes of Health, USA) to generate the ground truth segmentation masks, following the procedure suggested in 5 to label the nuclei. We used the region of interest (ROI) manager tool (available in the Analysis tab) with the freehand option to delineate the nuclei borders, manually drawing the border of each instance until all nuclei in a given image patch were segmented. Although semi-automatic tools such as AnnotatorJ with a U-Net backbone 29 could have been used to speed up the annotation, we stuck to fully manual segmentation to prevent any hidden bias towards a semi-automatic annotation method. The delineated ROIs were saved as a zip file, and the Matlab software (version 2020a) was then used to create binary and labeled segmentation images (as PNG files). Besides the original raw image patches and the binary and labeled segmentation masks, we also publish a number of auxiliary segmentation masks that can be useful for developing computer-based segmentation models. These auxiliary masks, including border-removed binary masks, Euclidean distance maps of the nuclei, and weighted binary masks (where higher weights are assigned to the borders of touching objects), are published along with our dataset. The code developed to generate these masks is available in the published GitHub repository. Moreover, for the first time, we annotated the ambiguous areas in all images of the dataset. Ambiguous-region annotations were partially provided in the test set of the MoNuSAC challenge 30 , but in this work, we provide them for the entire dataset. We used an identical procedure and software to create the ambiguous segmentation masks. These vague areas consist of image parts with very complex appearances where accurate and reliable manual annotation is impossible; they are potentially helpful for in-depth analysis and evaluation of any automatic model for nuclei instance segmentation. Manual segmentation of nuclei and ambiguous area detection were performed by three students with a background in cell biology. The annotations were then checked by a senior cell biologist and corrected where necessary. Example images, along with the related segmentation and vague-area masks, are shown in Figure 1.
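The auxiliary masks described above can be derived from a labeled mask alone. The sketch below, assuming numpy and scipy, computes a per-nucleus Euclidean distance map and a weighted binary mask in the style of the original U-Net weight map (higher weights between touching objects); it is an illustrative reimplementation, not the code published in the repository, and the function name and the `w0`/`sigma` defaults are assumptions.

```python
import numpy as np
from scipy import ndimage

def auxiliary_masks(labels: np.ndarray, w0: float = 10.0, sigma: float = 5.0):
    """From a labeled mask (0 = background, 1..N = nuclei) derive a binary mask,
    a per-nucleus Euclidean distance map, and a U-Net-style weight map that
    emphasises background pixels between nearly-touching nuclei."""
    binary = (labels > 0).astype(np.uint8)

    # Per-nucleus Euclidean distance map, normalised to [0, 1] per instance.
    dist_map = np.zeros(labels.shape, dtype=np.float32)
    for lbl in np.unique(labels):
        if lbl == 0:
            continue
        inst = labels == lbl
        d = ndimage.distance_transform_edt(inst)
        if d.max() > 0:
            dist_map[inst] = d[inst] / d.max()

    # Weight map: for each background pixel, combine the distances to the two
    # nearest nuclei (Ronneberger et al.'s border-weighting scheme).
    ids = [l for l in np.unique(labels) if l != 0]
    if len(ids) >= 2:
        dists = np.stack([ndimage.distance_transform_edt(labels != l) for l in ids])
        dists.sort(axis=0)
        d1, d2 = dists[0], dists[1]
        weights = 1.0 + w0 * np.exp(-((d1 + d2) ** 2) / (2 * sigma ** 2)) * (binary == 0)
    else:
        weights = np.ones(labels.shape, dtype=np.float32)
    return binary, dist_map, weights
```

A border-removed binary mask can be obtained similarly by eroding each instance before merging them, which keeps touching nuclei separable in the binary representation.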

Data Records
The NuInsSeg dataset is publicly available on the Kaggle platform (https://www.kaggle.com/datasets/ipateam/nuinsseg). The related code to generate the binary, labeled, and auxiliary segmentation masks from the ImageJ ROI files is available on the NuInsSeg GitHub repository (https://github.com/masih4/NuInsSeg).

Technical Validation
To create a baseline segmentation benchmark, we randomly split the dataset into five folds with an equal number of images per fold (i.e., 133 images per fold). We used the Scikit-learn Python package to create the folds with a fixed random state to make the results reproducible (the splitting code is available on the Kaggle and GitHub pages). Based on the created folds, we developed a number of DL-based segmentation models and evaluated their performance with five-fold cross-validation. To facilitate the use of our dataset and the development of segmentation models, we published our code for two standard segmentation models, namely the shallow U-Net and deep U-Net 31 , on the Kaggle platform. The architectures of the shallow U-Net and deep U-Net are very similar to the original U-Net model, but we added dropout layers between all convolutional layers in both the encoder and decoder parts. Four and five convolutional blocks were used in the encoder and decoder parts of the shallow U-Net and deep U-Net, respectively. The architectures of these two models are publicly available in our published kernels on the NuInsSeg page on the Kaggle platform. Besides these two models, we also evaluated the performance of the attention U-Net 32 , residual attention U-Net 32,33 , two-stage U-Net 34 , and dual decoder U-Net 13 models; their architectural details were published in the respective articles. We performed an identical five-fold cross-validation scheme in all experiments to make the results comparable. For evaluation, we utilized the Dice similarity score, the aggregated Jaccard index (AJI), and the panoptic quality (PQ) score, as suggested in former studies 5,6,35 . The segmentation performance of the aforementioned models is reported in Table 3. As the results show, the residual attention U-Net delivers the best overall Dice score among these models, while the dual decoder U-Net provides the best average AJI and PQ scores. Interestingly, the dual decoder model achieved the best overall PQ score on the MoNuSAC post-challenge leaderboard 17,36 , and it also achieved the best instance-based segmentation scores on the NuInsSeg dataset. It should be noted that these results can potentially be improved by well-known strategies such as ensembling 37 , stain augmentation 38 , or test-time augmentation 39 , but achieving the best segmentation scores is outside the focus of this study. Instead, these results can serve as baseline segmentation scores for comparison with other segmentation models in the future, provided the same five-fold cross-validation scheme is used.
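The fold construction and the pixel-level Dice evaluation can be sketched as below. The authors' actual splitting code (scikit-learn with a published random state) is on the Kaggle and GitHub pages; this is a dependency-light numpy equivalent, and the seed 42 is a placeholder, not the published random state.

```python
import numpy as np

# Reproducible five-fold split over the 665 NuInsSeg image patches.
rng = np.random.default_rng(42)  # placeholder seed, not the published one
folds = np.split(rng.permutation(665), 5)  # five folds of 133 indices each

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity on binary masks. The instance-aware AJI and PQ metrics
    additionally require the labeled masks and instance matching, as defined
    in the cited papers."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float((2.0 * inter + eps) / (pred.sum() + gt.sum() + eps))
```

In each cross-validation round, one fold serves as the test set and the remaining four are used for training, so every image is evaluated exactly once.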

Usage Notes
Our dataset, including raw image patches, binary and labeled segmentation masks, and other auxiliary segmentation masks, is publicly available on the published NuInsSeg page on the Kaggle platform.
Step-by-step instructions to perform the manual annotations and the related code to generate the main and auxiliary segmentation masks are available in our published GitHub repository. We also provide three kernels on the Kaggle platform to facilitate the use of our dataset. One kernel is devoted to exploratory data analysis (EDA), where interested researchers can visualize and explore different statistics of the NuInsSeg dataset. The other two kernels contain the code to perform five-fold cross-validation with two DL-based models, namely the shallow U-Net and deep U-Net described in the previous section. Various Python packages were used in these kernels; to report statistics and visualize data in the EDA kernel, we mainly used the Pandas package. We deliberately published our dataset on the Kaggle platform, where limited free computational resources are available; therefore, interested researchers can directly access our dataset and develop ML- or DL-based algorithms to perform nuclei instance segmentation on the NuInsSeg dataset. However, there is no limitation on downloading and saving the dataset on local systems and performing the analysis using local or other cloud-based computational resources.
It is worth mentioning that the NuInsSeg dataset can be used alone to train, validate, and test any segmentation algorithm, or it can be used as an independent test set to measure the generalization capability of already developed segmentation models.

Figure 1 .
Figure 1. Example images and manual segmentation masks of three human organs from the NuInsSeg dataset. The first three columns show the original image, the labeled mask, and the binary mask, respectively. The fourth to sixth columns show auxiliary segmentation masks that can be beneficial for the development of segmentation algorithms. The last column shows the vague areas where accurate and deterministic manual segmentation is impossible; some images, such as the spleen image in the last row, do not contain ambiguous regions.

Table 1 .
Publicly available H&E-stained nuclei segmentation datasets.In the table, TCGA refers to The Cancer Genome Atlas, UHCW refers to University Hospitals Coventry and Warwickshire, and MUV refers to Medical University of Vienna.The last row of the table represents the NuInsSeg dataset introduced in this work.

Table 2 .
Details of the NuInsSeg dataset per human and mouse organ. The related code to generate the segmentation masks from the ImageJ ROI files is also available on the published NuInsSeg GitHub repository (https://github.com/masih4/NuInsSeg). The dataset contains 665 image patches with 30,698 segmented nuclei from 31 human and mouse organs; the organ-specific details are shown in Table 2. As the table shows, the nuclei density in some tissues/organs (e.g., mouse spleen) is much higher than in others (e.g., mouse muscle).

Table 3 .
NuInsSeg segmentation benchmark results based on five-fold cross-validation