Inference methods¶
Sliding Window Inference¶

monai.inferers.sliding_window_inference(inputs, roi_size, sw_batch_size, predictor, overlap=0.25, mode=<BlendMode.CONSTANT: 'constant'>, padding_mode=<PytorchPadMode.CONSTANT: 'constant'>, cval=0.0)[source]¶

Sliding window inference on inputs with predictor.
When roi_size is larger than the inputs' spatial size, the input image is padded during inference. To maintain the same spatial size, the output image will be cropped to the original input size.
Parameters

- inputs (Tensor) – input image to be processed (assuming NCHW[D]).
- roi_size (Union[Sequence[int], int]) – the spatial window size for inferences. If a component is None or non-positive, the corresponding dimension of the input image is used instead. For example, roi_size=(32, -1) will be adapted to (32, 64) if the second spatial dimension size of img is 64.
- sw_batch_size (int) – the batch size to run window slices.
- predictor (Callable) – given an input tensor patch_data in shape NCHW[D], predictor(patch_data) should return a prediction with the same spatial shape and batch size, i.e. NMHW[D], where HW[D] represents the patch spatial size, M is the number of output channels, and N is sw_batch_size.
- overlap (float) – amount of overlap between scans.
- mode (Union[BlendMode, str]) – {"constant", "gaussian"} how to blend the output of overlapping windows. Defaults to "constant". "constant": gives equal weight to all predictions. "gaussian": gives less weight to predictions on the edges of windows.
- padding_mode (Union[PytorchPadMode, str]) – {"constant", "reflect", "replicate", "circular"} padding mode when roi_size is larger than inputs. Defaults to "constant". See also: https://pytorch.org/docs/stable/nn.functional.html#pad
- cval (float) – fill value for "constant" padding mode. Default: 0.
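The roi_size fallback rule described above can be sketched in plain Python. This is an illustration, not MONAI's implementation; the helper name fall_back_tuple is chosen for this example:

```python
# Illustrative sketch (not MONAI's code) of the roi_size fallback rule:
# None or non-positive components are replaced by the corresponding
# input image dimensions.
def fall_back_tuple(roi_size, image_size):
    return tuple(
        img_dim if (r is None or r <= 0) else r
        for r, img_dim in zip(roi_size, image_size)
    )

# a non-positive second component falls back to the image size (64):
print(fall_back_tuple((32, -1), (64, 64)))  # -> (32, 64)
```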
Raises

NotImplementedError – when inputs does not have batch size = 1.
Note

The input must be channel-first and have a batch dimension; both spatial 2D and 3D inputs are supported. Currently only inputs with batch_size=1 are supported.
 Return type
Tensor
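A rough sketch of the scanning pattern implied by roi_size and overlap: window start positions advance by roughly roi_size * (1 - overlap) along each spatial axis, with the final window clipped so it ends at the image border. This mirrors the description above but is not MONAI's exact internal logic:

```python
import math

def window_starts(image_len, roi_len, overlap=0.25):
    """Start indices of 1-D sliding windows covering [0, image_len).

    Assumes image_len >= roi_len (MONAI pads the input otherwise).
    """
    stride = max(int(roi_len * (1.0 - overlap)), 1)  # step between windows
    n = max(int(math.ceil((image_len - roi_len) / stride)) + 1, 1)
    # the last window is clipped so it ends exactly at the image border
    return [min(i * stride, image_len - roi_len) for i in range(n)]

# a 64-pixel axis scanned with 32-pixel windows and 25% overlap:
print(window_starts(64, 32, overlap=0.25))  # -> [0, 24, 32]
```

With overlap=0.25 the stride is 24, so the windows [0, 32), [24, 56), [32, 64) cover the axis, and the overlapping regions are blended according to mode.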
Inferers¶

class monai.inferers.Inferer[source]¶

A base class for model inference. Extend this class to support operations during inference, e.g. a sliding window method.
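The extension pattern can be sketched in plain Python (an illustration of the base-class idea only, not MONAI's actual class bodies): subclasses implement the call so that the same inference interface can dispatch to different strategies.

```python
# Illustrative sketch of the Inferer base-class pattern (not MONAI's code).
class Inferer:
    def __call__(self, inputs, network):
        raise NotImplementedError("Subclasses should implement __call__.")

class SimpleInferer(Inferer):
    """Runs the network on the whole input in a single forward pass."""
    def __call__(self, inputs, network):
        return network(inputs)

# any callable can stand in for a model here:
double = lambda xs: [2 * x for x in xs]
print(SimpleInferer()([1, 2, 3], double))  # -> [2, 4, 6]
```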
SimpleInferer¶
SlidingWindowInferer¶

class monai.inferers.SlidingWindowInferer(roi_size, sw_batch_size=1, overlap=0.25, mode=<BlendMode.CONSTANT: 'constant'>)[source]¶

Sliding window method for model inference, with sw_batch_size windows for every model.forward() call.
 Parameters
roi_size (
Union
[Sequence
[int
],int
]) – the window size to execute SlidingWindow evaluation. If it has nonpositive components, the corresponding inputs size will be used. if the components of the roi_size are nonpositive values, the transform will use the corresponding components of img size. For example, roi_size=(32, 1) will be adapted to (32, 64) if the second spatial dimension size of img is 64.sw_batch_size (
int
) – the batch size to run window slices.overlap (
float
) – Amount of overlap between scans.mode (
Union
[BlendMode
,str
]) –{
"constant"
,"gaussian"
} How to blend output of overlapping windows. Defaults to"constant"
."constant
”: gives equal weight to all predictions."gaussian
”: gives less weight to predictions on edges of windows.
Note

sw_batch_size here is the number of window slices of a single input image processed per model.forward() call, not the batch size of input images.
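The role of sw_batch_size can be sketched in plain Python (an illustration, not MONAI's internals): all window slices extracted from one image are grouped into chunks of sw_batch_size, and each chunk becomes one forward pass through the model.

```python
# Illustrative sketch: group the window slices of ONE image into batches
# of sw_batch_size for the model (not MONAI's implementation).
def batch_windows(windows, sw_batch_size):
    return [
        windows[i:i + sw_batch_size]
        for i in range(0, len(windows), sw_batch_size)
    ]

# 5 window slices with sw_batch_size=2 -> 3 forward passes:
print(batch_windows(list(range(5)), 2))  # -> [[0, 1], [2, 3], [4]]
```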