Main

Images and their analyses are widespread in life science and medicine. Microscopy imaging is a dynamic area of technology development, in terms of both hardware and software. This is especially true in light microscopy, with recent improvements enabling sensitive, fast and high-resolution imaging of diverse samples. Existing resources developed by scientists can help researchers navigate the design of microscopy experiments1,2,3 and cover aspects such as sample preparation1, microscope usage1,4, method reporting5,6,7,8 and fluorophore and filter usage9,10. Despite widespread adoption of microscopy as a tool for biology and biomedical research, the resulting image figures in publications at times fail to fully communicate results or are not entirely understandable to audiences. This may be because authors do not include comprehensive imaging method statements11 or because they omit basic information in figures, such as specimen size or color legends12, that is key to fully understanding the data. To ensure that images are presented in a clear, standardized and reproducible manner, it is essential that the scientific community establishes unified and harmonized guidelines for image communication in publications.

Images document biological samples and the ranges of their phenotypes. Increasingly, microscopy images are also a source of quantitative biological data, and variables are measured with a growing number of open source image-analysis software packages such as Fiji–ImageJ13, CellProfiler14, KNIME15 and Python software libraries16 (https://scikit-image.org (ref. 17)) and commercial software packages such as ZEN, LAS X, NIS-Elements, Amira-Avizo, Imaris, Arivis Vision 4D and Huygens18. Image analysis is often a workflow of many steps, such as image reconstruction, segmentation, processing, rendering, visualization and statistical analysis, many of which require expert knowledge19,20. A comprehensive publication of quantitative image data should then include not only basic specimen and imaging information, but also the image-processing and analysis steps that produced the extracted data and statistics. Toward fully reproducible image analysis, it is also essential that images and workflows are available to the community, for example, in image repositories or archives21,22,23, and code repositories such as GitHub24.

To ensure that image figures provide insight to their readership, the supporting experimental metadata and image-analysis workflows must be clear and understandable (‘what is the pixel size?’, ‘what does the arrow mean?’), accessible (‘are colors visible to color-blind audiences?’), representative (no cherry picking) and reproducible (‘how were the data processed?’, ‘can one access and reanalyze the images?’). In the framework of the initiative for ‘Quality Assessment and Reproducibility for Instruments and Images in Light Microscopy’ (QUAREP-LiMi)25,26, the ‘Image Analysis and Visualization Working Group’ established community consensus checklists to help scientists to publish understandable and reproducible light microscopy images and image-analysis procedures. When applicable, the checklists are aligned with the FAIR (findability, accessibility, interoperability and reusability) principles, which were developed as recommendations for research data27.

Scope of checklists

The aim of the checklists is to help scientists publish fully understandable and interpretable images and results from image analysis (Fig. 1). In this work, the term ‘images’ includes raw or essentially unprocessed light microscope data, compressed or reconstructed images and quantification results obtained through image analysis. While the focus of QUAREP-LiMi is on light microscopy in the life sciences, the principles may also apply to figures with other images (photos, electron micrographs, medical images) and to image data beyond the life sciences. The checklists are intended for all users, from beginners and laypersons who only occasionally work with light microscopy to experienced users and experts, such as core facility staff or members of the bioimage community, who review image data or teach image processing.

Fig. 1: Scope of the checklists.

The checklists present easy-to-use guidelines for publishing microscopy image figures and image-analysis workflows. The results may include images or measurements of images (for example, cell area or cell circularity (circ.)).

The checklists do not include principles for designing imaging experiments or recommendations to avoid image manipulation. Previous literature covers experimental design for microscopy images, including accurate and reproducible image acquisition and ensuring image quality2,28, examples and recommendations for avoiding misleading images1,29,30,31,32,33, detection of image manipulation34,35,36,37, appropriate image handling and analysis7,19,38,39, guidelines for writing materials and methods sections for images40, and recommendations for general figure preparation41.

The checklists cover image (Fig. 2) and image-analysis (Fig. 3) publication and are structured into three levels that prioritize legibility and reproducibility.

  • The first reporting level (‘minimal’) describes necessary, non-negotiable requirements for the publication of image data (microscopy images, data obtained through image analysis). Scientists can use these minimal criteria to identify crucial gaps before publication.

  • The second reporting level (‘recommended’) defines measures to ensure the understandability of images and aims to reduce the effort needed to evaluate image analyses. We encourage scientists to aim for the ‘recommended’ level as their image-publication goal. However, we acknowledge that some aspects (for example, depositing large data in repositories) may be unattainable today for some authors.

  • The third reporting level (‘ideal’) includes recommendations that we encourage scientists to consider adopting in the future.

Fig. 2: Checklist for image publication.

This checklist includes points to be addressed on image format, image colors and channels, image annotations and image availability. Min, minimum; max, maximum.

Fig. 3: Checklist for publication of image-analysis workflows.

This checklist includes points to be addressed on established workflows, new workflows and machine learning workflows.

Checklists for image publication

Image formatting

After exploring, processing and analyzing image data, authors communicate their insights in publications with figures as visual evidence. Preparing a figure begins with the selection of representative images from the dataset that illustrate the message. When quantitative measurements are reported in a chart, an example of the input image should be shown; when ranges of phenotypes are described, several images may be necessary to illustrate this diversity. To quickly focus the audience on key structures in the image, it is permitted to crop areas without data or with irrelevant data (Fig. 4a). As a rule, cropping, like selecting the field of view on the microscope, is allowed as long as it does not change the meaning of the conveyed image.

Fig. 4: Image formatting, colors and channels.

a–c, Image formatting may include image cropping, rotation and resizing (a), image spacing in the figure (b) and presenting several magnifications (zoom, inset) of images (c). d–g, Image colors and channels. d, Adjust brightness and/or contrast to achieve good visibility of the imaging signal. e, Channel information should be annotated and visible to audiences (high contrast to background color, visible to color-blind audiences). f, Image details are most unbiased in grayscale. g, It is best practice to publish legends for color intensities (intensity and/or calibration scales) with images, and this is recommended for pseudocolor scales.

Next, specimens are often presented in a consensus orientation in figures (for example, apical side of cells upward, tree top upward), which may require image rotation. When such rotation is done in vector software, pixel interpolation is not necessary, as the square representing a pixel can be rotated and resized as is, without modifying the underlying data. When rotation is done in pixel-based image-processing software, however, any rotation that is not a multiple of 90 degrees changes the intensity values through interpolation and therefore alters the information in the image31,42,43. The effect of interpolation may be negligible in large images composed of many pixels but can greatly distort the information in small or zoomed images composed of fewer than 100 × 100 pixels.
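A minimal sketch in Python (NumPy and scikit-image; the array size and 30-degree angle are illustrative) demonstrates that a non-90-degree rotation and its inverse do not restore the original pixel values:

    # Rotations by multiples of 90 degrees only re-index the pixel grid (lossless);
    # any other angle interpolates between pixels and alters intensity values.
    import numpy as np
    from skimage.transform import rotate

    image = np.random.default_rng(0).random((64, 64))   # stand-in for a small image

    assert np.array_equal(np.rot90(np.rot90(image, 1), -1), image)  # 90 degrees: lossless

    round_trip = rotate(rotate(image, 30), -30)          # 30 degrees there and back
    core = (slice(16, 48), slice(16, 48))                # ignore border fill
    print(np.allclose(image[core], round_trip[core]))    # False: values were changed
    print(np.abs(image[core] - round_trip[core]).max())  # non-negligible error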

When cropping and rotating, authors should ensure that the operation does not affect the original information contained in the image; while loss in image quality may be acceptable, quantifications, especially intensity measurements, should be performed beforehand39. In a figure, individual images should be well separated (spacing, border; Fig. 4b) to avoid misleading image splicing29,31.

When presenting two magnification views of the same image (for example, a full and a zoomed or inset view), the position of the inset in the full-view image should be made clear; if the inset is placed on top of the full-view image, for example, to save space, it should not obstruct key data (Fig. 4c). If an inset is digitally zoomed, the original pixels should not be interpolated but simply enlarged (‘resized’) to preserve the original information. Overall, the image should be sufficiently large that audiences can identify all relevant details.
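A minimal sketch of such interpolation-free resizing (the crop coordinates and zoom factor are illustrative):

    # Zoom an inset by pixel replication (nearest neighbor): no new values created.
    import numpy as np
    from skimage.transform import rescale

    image = np.random.default_rng(1).random((256, 256))
    inset = image[40:72, 40:72]                  # illustrative crop coordinates

    zoom_nn = rescale(inset, 4, order=0)         # order=0: nearest neighbor
    zoom_np = np.kron(inset, np.ones((4, 4)))    # equivalent pure-NumPy replication

    # Every displayed value is an original pixel value.
    assert set(np.unique(zoom_nn)) <= set(np.unique(inset))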

Image colors and channels

Fluorescence light microscopes use a range of wavelengths to generate images of specimens. In the images, the light intensity for each wavelength, most often recorded in grayscale, is assigned or mapped to a visible color scheme. In multicolored images, several channels are overlaid so that their data can be compared.

Microscopy images often must be processed to adapt the bit depth to the visible range2,44. In many software platforms (for example, ImageJ–Fiji), brightness and contrast are usually adjusted for each channel independently by defining the minimum and maximum displayed intensity values before converting these into eight-bit images (for screen display, printing). Intensity-range adjustments should be monitored (for example, with the image histogram) and performed with care: too wide an intensity range results in ‘faded’ images that lack details, while too narrow an intensity range removes data (Fig. 4d). Scientists must be especially attentive with auto-contrast and auto-level, image-intensity normalization, non-linear adjustments (‘gamma’, histogram equalization and local contrast, for example, CLAHE45), image filters and image-restoration methods (for example, deconvolution, Noise2Void, CARE46,47,48,49), as their improper application may result in misleading images. When images are quantitatively compared in an experiment, the same adjustments and processing steps must be applied. If deemed critical for understanding the image data, advanced image-processing steps (for example, deconvolution, Noise2Void, CARE) may need to be indicated in the figure or figure legend, in addition to the materials and methods section.
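A minimal sketch of the linear display adjustment described above (the display limits lo and hi are illustrative and should be read off the image histogram):

    # Map a 16-bit channel linearly to 8 bit by setting displayed min/max values.
    import numpy as np

    def to_8bit(channel, lo, hi):
        """Scale intensities in [lo, hi] to [0, 255]; values outside are clipped."""
        scaled = (channel.astype(np.float64) - lo) / (hi - lo)
        return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

    channel16 = np.random.default_rng(2).integers(0, 4096, (256, 256), dtype=np.uint16)
    display = to_8bit(channel16, lo=100, hi=3000)
    # Apply the SAME lo/hi to every image that is quantitatively compared.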

Next, image colors must be interpretable and accessible to readers and not misleading50. For full-color (for example, histology) images, the staining and preparation methods and, for fluorescence microscope images, channel-specific information (fluorophore or labeled biomolecule) should be annotated (Fig. 4e and Image annotation). In fluorescence microscope images, channels can be assigned a user-defined color scheme, often referred to as a lookup table (LUT), which should be chosen such that the imaged structures are easily distinguishable from the background and accessible to color-blind audiences12. Grayscale color schemes are best for single channels because they are uniformly perceived, allowing unbiased interpretations of intensity values in a given image. Inverting image LUTs to display intensities on a white instead of a black background may enhance signal contrast further, but be aware that different software handles this calculation differently.

A few steps may improve the understandability of colors overall. For multicolored fluorescent images, showing individual channels in separate grayscale panels provides the best visibility and the highest contrast of detailed structures. Grayscale images are also accessible to all audiences, including color-blind persons (Fig. 4f). If several channels must be merged into one image, choose a color combination that is also visible to color-blind audiences12,51. A separate, linearly adjusted grayscale version may also help when images are displayed with non-linear adjustments or pseudocolored LUTs (for example, ‘jet’, ‘viridis’ and ‘Union Jack’), which map intensity values to a dual- or multiple-color sequential scheme. Annotation of intensity values with an intensity scale bar (‘calibration bar’) helps to orient readers and is essential for pseudocolored and non-linear color schemes (Fig. 4g). Calibration bars should indicate absolute intensity values to inform audiences about the displayed intensity range and can be prepared with Imaris and ImageJ–Fiji (ImageJ user guide: https://imagej.net/ij/docs/guide/).
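As a minimal sketch of the channel merging described above (the function name and channel-to-color assignment are illustrative), two eight-bit grayscale channels can be combined into a green/magenta composite that most color-blind readers can distinguish:

    # Merge two grayscale channels into a green/magenta RGB composite;
    # regions where both signals overlap appear white.
    import numpy as np

    def merge_green_magenta(ch_green, ch_magenta):
        """Both inputs are 2D uint8 arrays of the same shape."""
        rgb = np.zeros(ch_green.shape + (3,), dtype=np.uint8)
        rgb[..., 0] = ch_magenta    # red  (part of magenta)
        rgb[..., 1] = ch_green      # green
        rgb[..., 2] = ch_magenta    # blue (part of magenta)
        return rgb

    # Also show each channel as its own grayscale panel for best contrast.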

Image annotation

Image acquisition must be described in detail in the methods section. Additionally, to best communicate image data, some of this information is required, or at least beneficial, directly associated with the figure as a caption or an annotation. Light microscopy images show objects ranging in size from submicrometers to millimeters. As physical size is not obvious without context, annotating the scale is necessary for publication. A scale bar of a given size (in or next to the image) is needed to orient audiences (Fig. 5a). The corresponding size statement or dimension, for example, ‘0.5 mm’, can be placed next to the scale bar (or, when this is not possible, in the figure legend). To avoid quality changes (pixelated or illegible text) when figures are adapted (resized, compressed) for publication, annotations should be added as vector graphics. Statements about the physical dimensions of the entire image are acceptable alternatives to scale bars. Magnification statements should be avoided, as pixel size is determined by a number of factors, for example, sampling rate or binning, and does not depend only on the objective magnification.
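A minimal sketch of deriving a scale bar from the calibrated pixel size and drawing it as described above (pixel size and bar length are illustrative; the real pixel size comes from the acquisition metadata):

    # Derive the scale bar length in pixels from the calibrated pixel size and
    # draw it with matplotlib; saving to PDF keeps the annotation as vectors.
    import numpy as np
    import matplotlib.pyplot as plt

    pixel_size_um = 0.1                     # physical size of one pixel (metadata)
    bar_um = 10.0                           # desired bar length: '10 µm'
    bar_px = bar_um / pixel_size_um         # = 100 pixels

    img = np.random.default_rng(3).random((512, 512))
    fig, ax = plt.subplots()
    ax.imshow(img, cmap='gray')
    ax.plot([20, 20 + bar_px], [490, 490], color='white', linewidth=3)  # the bar
    ax.text(20, 478, f'{bar_um:g} µm', color='white')                   # size statement
    ax.axis('off')
    fig.savefig('figure_panel.pdf')         # vector output: text stays crisp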

Fig. 5: Image annotation and availability.

a–d, Image annotation. a, Possible ways to provide scale information. b, Features in images can be annotated with symbols, letters or ROI. c,d, For advanced image publication, information on the anatomical view or intervals in time-lapses may be required. hh:mm, time in hours and minutes; 3D, three dimensional. For local storage, size limit is not applicable (N/A). e–g, Image availability. e, Currently, image data are often shared ‘upon request’. f,g, More images along with the image metadata should be available for download in public databases (f) and, in the future, also should be archived in dedicated, added-value databases, in which images are machine searchable or curated (g).

Many images include further annotations such as symbols (arrows, asterisks), letter codes or regions of interest (ROI; dashed shapes), which must be explained in the figure or the figure legend (Fig. 5b). Symbols that resemble image data should be avoided, and note that symbols with a clear vertical or horizontal arrangement are easier to distinguish on busy backgrounds than randomly oriented symbols (for example, ref. 12). At times, additional annotations, such as the anatomical section of a three-dimensional object or the imaging interval of time-lapse data, may help readers to interpret the figure (Fig. 5c,d). All annotations, including scale bars and ROI indications, must be legible, that is, have sufficient size or point size, line widths and colors in high contrast to the background. In addition to being legible, scale bars should have a meaningful length with regard to the object shown. Annotations placed on top of images should not obscure key image data and should be legible to color-blind persons.

Image availability

Image-processing operations should not overwrite the original microscope image42,43, and, upon publication, both the original image (or a losslessly compressed version) and the image file shown in the published figure should be available. The specific file type of the original image depends on the microscope type and the vendor. The definition of ‘original data’ or ‘raw data’, and whether their storage is feasible, depends on the specific microscopy technique. In data-heavy techniques (for example, light sheet microscopy), it may be acceptable to make available cropped, binned or lossy compressed images that faithfully capture the key scientific content. To retain the metadata, images should be converted into standard or open formats such as OME-TIFF52, which supports uncompressed, losslessly compressed and lossy (compressed) files and is compatible with a broad range of applications, allowing reanalysis of the image data. If only a compressed version can be kept (that is, a file in which image channels and annotations are irretrievably merged), PNG files are superior to the JPEG format, as they allow lossless compression43.
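As a minimal sketch of the OME-TIFF conversion mentioned above, assuming the Python tifffile library (axis order, channel count and pixel sizes are illustrative):

    # Save a two-channel image as OME-TIFF so that channel structure and pixel
    # size are stored as standard, machine-readable metadata.
    import numpy as np
    import tifffile

    stack = np.random.default_rng(4).integers(0, 4096, (2, 512, 512), dtype=np.uint16)
    tifffile.imwrite(
        'example.ome.tif',
        stack,
        ome=True,                            # write OME-XML metadata
        metadata={
            'axes': 'CYX',                   # channel, then Y, X
            'PhysicalSizeX': 0.1,            # pixel size in micrometers
            'PhysicalSizeXUnit': 'µm',
            'PhysicalSizeY': 0.1,
            'PhysicalSizeYUnit': 'µm',
        },
    )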

As a minimal requirement, the image files shown in figures or used for quantification should be available. When possible (see limitations above), losslessly compressed files, which allow replication of the analysis workflow, should be shared or made available (Fig. 5e–g). We strongly discourage author statements that images ‘are available upon request’, as this has been shown to be inefficient53,54; however, at present, infrastructure is not sufficiently in place to ban this option. A clear advancement is depositing both the published and the original images in a public repository with an open license that permits reuse in the scientific context (CC-BY, CC0). Zenodo, Open Science Framework and figshare are currently essentially free options for image data as well; however, some of these have file size limitations (Fig. 5e–g and Extended Data Fig. 1). OMERO servers (https://www.openmicroscopy.org/omero/institution/) enable institutions as well as individual laboratories to host public or private (access-controlled) image-sharing databases (see Extended Data Fig. 1 for an overview of current repositories). In the long term (‘ideal’), uploading images with all experimental metadata to dedicated, specialized or fully searchable image databases has the potential to unlock the full power of image data for automated image and/or metadata searches and the possibility of image data reuse. Databases that allow such functionalities and more include the BioImage Archive (a primary repository that accepts any image data in publications; https://www.ebi.ac.uk/bioimage-archive/), the Image Data Resource (which publishes specific reference image datasets; https://idr.openmicroscopy.org/) and EMPIAR (a dedicated resource for electron microscopy datasets; https://www.ebi.ac.uk/empiar/). At present, most of the available options are free of charge for 20–50-GB datasets; however, dedicated image databases have strict requirements regarding file type and metadata (Fig. 5e–g and Extended Data Fig. 1). To be inclusive, we do not enforce the use of online repositories but require as a minimal measure that scientists are prepared to share image data. Costly and expert-driven storage solutions are at present not accessible to all scientists in the global imaging community.

Checklists for publication of image-analysis workflows

Image-analysis workflows usually combine several processing steps carried out in a specific sequence to mathematically transform the input image data into a result (that is, image for visualization or data for a plot; Fig. 6 and ref. 39). As images are numerical data, image processing invariably changes these data and thus needs to be transparently documented31,39,43. We developed separate checklists for scientists wishing to publish results originating from image-processing and image-analysis workflows (Fig. 3). Focusing on easy implementation of the checklists, we propose three categories:

  1. Established workflows or workflow templates: workflows available in the scientific literature or well established in the respective fields.

  2. New workflows: established or new image-analysis components (available in software platforms or libraries) are assembled by researchers into a new workflow.

  3. Machine learning workflows: machine learning uses an extended technical terminology, and machine learning workflows that use deep neural networks (‘deep learning’) face unique challenges with respect to reproducibility. Given the rapid advancements in this field, we created a separate machine learning checklist.

Fig. 6: Image analysis.

a, An established workflow template is applied on new image data to produce a result (plot). b, A new sequence of existing image-analysis components is assembled into a new workflow for a specific analysis (image segmentation). c, Machine learning workflows learn specific tasks from data, and the resulting model is applied to obtain results.

Established workflows

Examples of well-established workflows are published pipelines for CellProfiler (CellProfiler published pipelines, https://cellprofiler.org/published-pipelines; CellProfiler examples, https://cellprofiler.org/examples), workflows in KNIME55, specialized plugins and easy-to-use scripts in ImageJ56,57,58 and tools and plugins that solve generic image-analysis problems such as tracking59 or pixel classification60,61. For these workflows, extensive expertise, documentation and tutorials already exist that allow others (for example, reviewers, readers) to reproduce the workflow and to judge the validity of the results. Scientists publishing images or image-analysis results processed with established workflows thus can focus on documenting key parameters only.

Minimal

Authors must cite the workflow they used. The specific software platform or library needs to be cited if the workflow is not available as a standalone tool. Key processing parameters must be reported. To validate the performance of the workflow and its settings, example input and output data must be provided. Any manual interventions (for example, ROI selection) must be documented. To ensure proper reproduction of the workflow, the precise version numbers of the workflow and the software platform used are vital and must be documented in the methods. If the software used does not allow the researcher to easily define and retrieve a specific version number, the exact version used must be deposited as a usable executable or code.
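For Python-based workflows, a minimal sketch of capturing the exact versions for the methods section (the packages listed are illustrative):

    # Print the exact software versions used in the analysis for the methods text.
    import sys
    import numpy
    import skimage

    print('Python      ', sys.version.split()[0])
    print('NumPy       ', numpy.__version__)
    print('scikit-image', skimage.__version__)
    # For GUI tools (Fiji/ImageJ, CellProfiler), report the version shown in the
    # application's About dialog instead.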

Recommended

Authors should state all settings in the methods or the supplementary material of the article. Providing data upon request is an ineffective method for data sharing54. Thus, authors should provide example input and output data and any manual ROI via a public repository (see Image availability).

Ideal

Documenting the use of software in the form of a screen recording or, in the case of command line tools, by reporting all executed commands in detail greatly facilitates understanding the workflow application and therefore reproduction. To avoid any variation arising from factors such as computer hardware or operating system, authors could provide cloud-hosted solutions62,63,64 (kiosk-imageJ-plugin, https://github.com/vanvalenlab/kiosk-imagej-plugin) or the workflow packaged in a software container (docker, https://www.docker.com/; Singularity, https://docs.sylabs.io/guides/3.5/user-guide/introduction.html)65.

New workflows

New image-analysis workflows assemble components in an original way into a new sequence, for example, a macro in Fiji, a pipeline in CellProfiler or a workflow in KNIME. To ensure reproducibility of the analysis, it is essential to report the specific composition and sequence of such new workflows.
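As a minimal sketch of such a new workflow, assuming scikit-image (the function name, filter sigma and minimum object size are illustrative key settings of the kind that must be reported):

    # New workflow assembled from established components:
    # smoothing -> Otsu threshold -> size filter -> labeling -> per-object area.
    import numpy as np
    from skimage import filters, measure, morphology

    def segment_and_measure(image, sigma=2.0, min_size=50):
        """Return per-object areas (in pixels) from a single-channel image."""
        smoothed = filters.gaussian(image, sigma=sigma)                       # 1. denoise
        binary = smoothed > filters.threshold_otsu(smoothed)                  # 2. threshold
        binary = morphology.remove_small_objects(binary, min_size=min_size)   # 3. clean up
        labels = measure.label(binary)                                        # 4. label
        return [region.area for region in measure.regionprops(labels)]       # 5. measure

    areas = segment_and_measure(np.random.default_rng(5).random((256, 256)))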

Minimal

The individual components used in the new workflow must be cited, named and/or described in detail in the methods section along with the software platform used. It is essential that scientists specify or provide the exact software versions of the used components and software platform in the methods whenever possible. Authors must describe the sequence in which these components have been applied. Key settings (for example, settings that deviate from default settings) must be documented in the methods section. Finally, the developed workflow must be shared as code (for example, via code repositories such as https://github.com/) or pipelines (for example, KNIME workflow, CellProfiler pipeline). Example input and output and any manually generated inputs (that is, ROI) must be made available (Image availability). For new workflows that were created using software that does not allow scripting, the workflow steps must be carefully described with text.
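To complement shared code, a minimal sketch of depositing the key settings in a machine-readable file (all keys and values are illustrative):

    # Write the key workflow settings to a JSON file deposited with the code.
    import json

    settings = {
        'workflow': 'segment_and_measure, version 1.0',
        'gaussian_sigma': 2.0,
        'threshold': 'Otsu (automatic)',
        'min_object_size_px': 50,
        'software': {'python': '3.11.4', 'scikit-image': '0.22.0'},
    }
    with open('analysis_settings.json', 'w') as fh:
        json.dump(settings, fh, indent=2)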

Recommended

All settings of the workflow should be disclosed and described to help reproduction of the analysis. Example input, output and manual inputs (ROI) should be provided via public repositories such as Zenodo. The developer should describe the rationale as well as the limitations of the workflow and the used components in more detail in the methods or supplementary material. Whenever possible, such documentation should include evidence of the adequacy and efficiency of the algorithms used on the published data and, potentially, even comparisons to related established workflows.

Ideal

To further promote reproducibility, documentation such as a screen recording or a text-based tutorial of the application of the workflow should be added. To enable efficient reproduction of an analysis with a new workflow, authors should provide easy installs (for example, update sites, packages) or easy software reproduction (for example, via software containers) and easy-to-use user interfaces of software (that is, graphical user interfaces). The new workflow should also be published as independent methods papers with extensive documentation and online resources55,56,57,58,59,60,61. In sum, extensive documentation and ease of installation and use will ultimately contribute to the new workflow becoming well established and reproduced within the community (a future established and published workflow template)66.

Machine learning workflows

Machine learning and especially deep learning have recently become capable of surpassing the quality of results of even the most sophisticated conventional algorithms and workflows and are continuing to advance67. Deep learning procedures are quickly adapted to microscopy image tasks such as U-Net68 for cell segmentation69, Noise2Void for image reconstruction47, StarDist70,71 and Cellpose62 for instance segmentation, DeepProfiler72 for feature extraction and Piximi (https://www.piximi.app/) for image classification.
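As a minimal sketch of applying one such published pretrained model (here StarDist; this assumes the stardist and csbdeep packages, and the random input is a stand-in for real data):

    # Apply a pretrained StarDist model to a single-channel image.
    import numpy as np
    from csbdeep.utils import normalize
    from stardist.models import StarDist2D

    model = StarDist2D.from_pretrained('2D_versatile_fluo')   # published pretrained model
    img = np.random.default_rng(6).random((256, 256))          # stand-in for real data
    labels, _ = model.predict_instances(normalize(img))        # instance segmentation
    # Cite the model and its version; no retraining documentation is needed here.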

In machine learning workflows (supervised, unsupervised, self-supervised, shallow or deep learning), input image data are transformed by one or multiple distinct mathematical operations into a scientific result. The instructions for this transformation are learned from provided data (for example, labeled data for supervised learning and unlabeled data for unsupervised learning) to produce a machine learning model. However, the precise makeup of this model is not easily accessible to a user and depends strongly on the quality and nature of the supplied training data as well as on the specific training parameters. Biases in the training data and errors in the ground-truth labels used for supervised machine learning will bias the resulting models73,74,75. Reporting is thus even more critical for reproducibility and understandability when machine learning is applied to image analysis.

Three major approaches, which require different levels of documentation, are widely used in machine learning-based image analysis today. (1) Pretrained models are directly applied to new image data; here, citing the existing reference is sufficient. (2) Pretrained models are retrained (transfer learning) with new image data to improve the application; in this case, more information must be provided. (3) Models are trained de novo, in which case extensive documentation is required for reproducibility.

Minimal (all models)

The precise machine learning method needs to be identifiable. Thus, the original method must be cited. At a minimum, access to the model produced by the particular learning approach must be provided, along with validation input and output data. If a pretrained model has been used, it must be clearly identifiable. Example data or validation data must be provided; validation data must not be part of the training and testing data.
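A minimal sketch of a reproducible, strictly separated data split (the dataset size, proportions and random seed are illustrative and should be reported):

    # Split annotated images into training, testing and held-out validation sets;
    # a fixed seed makes the split reproducible and reportable.
    import numpy as np

    rng = np.random.default_rng(42)
    indices = rng.permutation(1000)      # e.g., 1,000 annotated images
    train_idx = indices[:700]
    test_idx = indices[700:850]
    val_idx = indices[850:]              # never used during training or testing

    assert not set(val_idx) & (set(train_idx) | set(test_idx))
    # Report the seed and split sizes, and deposit the index lists with the model.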

Recommended (retrained and new models)

To facilitate the reproduction and validation of results from either models trained from scratch or pretrained models that were retrained, the full training and testing data and any training metadata (for example, training time) should be made available. The code used for training the model should be provided. Code as well as data should be provided via public repositories (for example, Zenodo, GitHub). The authors should discuss and ideally test how well the model has performed and show any limitations of the used machine learning approach on their data. The application of machine learning models will particularly benefit from being deployed in a cloud-hosted format or via software containers.

Ideal (new models)

Further standardization, making use of emerging online platforms, promotes ease of reproduction and validation by the scientific community. Thus, new models could be created conforming to standardized formats (for example, Model Zoo, https://bioimage.io/#/about) as these become more readily available in the future.

Discussion

Here, we have presented recommendations in the form of checklists to increase the understandability and reproducibility of published image figures and image analyses. While our checklists were initially intended for bioimages from light microscopes, we believe that their many principles are applicable more widely. Our checklists include recommendations for image formatting, annotation, color display and data availability, which, at the minimal level, can largely be achieved with commercial or open-source software (for example, ‘include scale bar’). Likewise, the minimal suggestions for image-analysis pipelines can be implemented readily with today’s options (for example, code repositories). We believe that, once included in microscopy core facility training and microscopy courses and introduced as guidelines from publishers, the recommendations will present no additional burden. On the contrary, transparent requirements for publishing images and progress-monitoring checklists will ease the path from starting a microscopy experiment to producing reproducible76, understandable image figures for all scientists.

Recommendations extending the ‘minimal’ level are introduced in the ‘recommended’ and ‘ideal’ reporting levels and at times go beyond what is easy to implement with standard tools today. They are meant to encourage continuous striving toward higher-quality standards in image publishing. Before all these advanced standards can become a new norm, technologies, software and research infrastructure must still be improved. At present, no image database is used widely enough to become a go-to solution, although dedicated resources exist and are slowly gaining traction, and publishers are experimenting with parallel solutions (for example, EMBO SourceData). Also, while funding agencies increasingly require data to be deposited in repositories, few guidelines are provided for publishing terabytes to petabytes of raw data. While publishers may mandate data deposition or availability, they do not always review its implementation. Combined with a lack of recognition of the effort put into publishing original image data, this means scientists are often discouraged from making data openly available. Commercial solutions for data storage are becoming increasingly available. For instance, Amazon Web Services Open Data has already been used to host image data (https://registry.opendata.aws/cellpainting-gallery/), and we believe that, ultimately, images presented in most publications should be linked to losslessly compressed images that are amenable to reanalysis.

The checklists and recommendations for image analyses will naturally be dynamic and require regular updates to reflect new developments in this active research domain. Moreover, it is possible that generation of publication-quality images will also become a standardized ‘workflow’ in and of itself. It was previously suggested that images should be processed through scripting, with every step from microscope output to published figure stored in a metadata file39. Another challenge is the continuous availability of image-analysis software and workflows, which require software maintenance and updates to stay usable. Beyond technical developments, it is important to create inclusive standards for image publication that are achievable for our diverse global scientific community, which differs greatly in access to training and support, imaging infrastructure and imaging software. Our explicit intention is that the minimal level, which we believe must be met, does not pose an additional monetary or skill burden on scientists and is achievable with attention to detail. Our further reporting levels, ‘recommended’ and ‘ideal’, should encourage scientists to improve the accessibility and explanatory power of their images in publications.

We envision that the present checklists will be continuously updated by the scientific community and adapted to future requirements and unforeseen challenges. We are currently working on version 1.0 of a web-based Jupyter Book as a home for the ongoing development of ideas, extensions and discussions on image publication. This consortium will continue future work in close alliance with similar initiatives such as NEUBIAS66,77, BINA and German BioImaging, initiatives that members of the authorship are involved with alongside their participation in QUAREP-LiMi. Collectively, we will work toward providing educational materials and tutorials based on the presented checklists and will continuously lobby to integrate their contents into general resources for better images78. We ask that all readers consider how their work will be seen and used in the future and join us in building a stronger scientific foundation for everyone. Version 1.0 of the presented checklists will already make images in publications more accessible, understandable and reproducible, providing a valuable resource and a solid foundation within today’s research that will benefit future science and scientists.