The standard patch-based approach crops the input image into small patches for training and slides the trained network across overlapping patches at inference. This is computationally inefficient at inference, since adjacent patches repeat the same computations in their overlap regions. A naive FCN eliminates these repeated computations but introduces grid-like artifacts, because the context seen at patch borders during training does not match the context available during full-image inference. The modified FCN removes the paddings from the network, ensuring a consistent context between training and inference. This yields artifact-free inference results with no repeated computation between adjacent patches.
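The equivalence behind the padding-free design can be illustrated with a minimal NumPy sketch (the `conv2d_valid` helper, image size, and patch split are illustrative assumptions, not the paper's implementation): a padding-free convolution applied to the whole image produces exactly the output that patch-based inference would assemble from overlapping patches, so the full-image pass can replace the sliding window without changing the result.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2D cross-correlation with no padding ('valid' mode), i.e. a
    convolution layer whose paddings have been removed."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
kernel = rng.standard_normal((3, 3))

# Full-image (padding-free FCN) inference: one pass, no repeated work.
full = conv2d_valid(image, kernel)

# Patch-based inference: crop two overlapping patches, run the same
# valid convolution on each, and stitch the outputs. The overlap
# (kernel_size - 1 rows) is where adjacent patches redo the same work.
ph = 16  # hypothetical split row; input rows 16-17 are processed twice
top = conv2d_valid(image[:ph + 2, :], kernel)   # output rows 0..15
bottom = conv2d_valid(image[ph:, :], kernel)    # output rows 16..29
stitched = np.vstack([top, bottom])

# Identical results, without the redundant overlap computation.
assert np.allclose(full, stitched)
```

Because the network has no padding, each output pixel depends only on a fixed receptive field of real input pixels, so the context it sees is the same whether inference runs on a patch or on the whole image; this is what removes the grid-like artifacts.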