Inference methods

Sliding Window Inference

monai.inferers.sliding_window_inference(inputs, roi_size, sw_batch_size, predictor, overlap=0.25, mode=<BlendMode.CONSTANT: 'constant'>, sigma_scale=0.125, padding_mode=<PytorchPadMode.CONSTANT: 'constant'>, cval=0.0, device=None)[source]

Sliding window inference on inputs with predictor.

When roi_size is larger than the inputs' spatial size, the input image is padded during inference. To maintain the same spatial sizes, the output image is then cropped back to the original input size.

Parameters
  • inputs (Tensor) – input image to be processed (assuming NCHW[D])

  • roi_size (Union[Sequence[int], int]) – the spatial window size for inferences. When a component is None or non-positive, the corresponding dimension of the input image is used instead. For example, roi_size=(32, -1) will be adapted to (32, 64) if the second spatial dimension of the input is 64.

  • sw_batch_size (int) – the batch size to run window slices.

  • predictor (Callable[[Tensor], Tensor]) – given input tensor patch_data in shape NCHW[D], predictor(patch_data) should return a prediction with the same spatial shape and batch_size, i.e. NMHW[D]; where HW[D] represents the patch spatial size, M is the number of output channels, N is sw_batch_size.

  • overlap (float) – Amount of overlap between scans.

  • mode (Union[BlendMode, str]) –

    {"constant", "gaussian"} How to blend output of overlapping windows. Defaults to "constant".

    • "constant": gives equal weight to all predictions.

    • "gaussian": gives less weight to predictions on edges of windows.

  • sigma_scale (Union[Sequence[float], float]) – the standard deviation coefficient of the Gaussian window when mode is "gaussian". Default: 0.125. Actual window sigma is sigma_scale * dim_size. When sigma_scale is a sequence of floats, the values denote sigma_scale at the corresponding spatial dimensions.

  • padding_mode (Union[PytorchPadMode, str]) – {"constant", "reflect", "replicate", "circular"} Padding mode for inputs, when roi_size is larger than inputs. Defaults to "constant". See also: https://pytorch.org/docs/stable/nn.functional.html#pad

  • cval (float) – fill value for "constant" padding mode. Default: 0.

  • device (Optional[device]) – device on which the windows are concatenated. By default, the input's device (and therefore its memory) is used. Setting, for example, device=torch.device('cpu') reduces GPU memory consumption and makes it independent of the input and roi_size. The output is placed on the specified device, or on the input's device if none is set.
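For intuition about the "gaussian" blend mode, the following is a minimal pure-Python sketch (not MONAI's actual kernel) of how per-position blend weights along one spatial dimension could be derived from sigma_scale, so that voxels near window edges receive smaller weights:

```python
import math

def gaussian_window_weights(roi_size: int, sigma_scale: float = 0.125) -> list[float]:
    """Illustrative per-position blend weights for a "gaussian"-style mode
    along one dimension: a Gaussian centered on the window with
    sigma = sigma_scale * roi_size, so edge positions are down-weighted."""
    sigma = sigma_scale * roi_size
    center = (roi_size - 1) / 2.0
    return [math.exp(-((i - center) ** 2) / (2 * sigma ** 2)) for i in range(roi_size)]
```

With the default sigma_scale of 0.125, the weight profile peaks at the window center and decays symmetrically toward both edges, which is what lets overlapping windows blend smoothly.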

Note

  • inputs must be channel-first and have a batch dimension; N-D sliding windows are supported.

Return type

Tensor
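The window placement described above (windows of roi_size advancing by an overlap-dependent step, with the final window shifted flush to the image boundary) can be sketched in pure Python for a single spatial dimension. This is an illustrative simplification, not MONAI's implementation; the function name is hypothetical:

```python
def window_starts(image_size: int, roi_size: int, overlap: float = 0.25) -> list[int]:
    """Compute start indices of sliding windows along one spatial dimension.

    Windows of length ``roi_size`` advance by ``roi_size * (1 - overlap)``;
    the final window is shifted back so it ends exactly at the boundary.
    """
    if roi_size >= image_size:
        return [0]  # a single (possibly padded) window covers the dimension
    step = int(roi_size * (1 - overlap))
    starts = list(range(0, image_size - roi_size + 1, step))
    if starts[-1] != image_size - roi_size:
        starts.append(image_size - roi_size)  # final window flush with the edge
    return starts
```

For example, with image_size=64, roi_size=32, and the default overlap of 0.25, the step is 24 and the windows start at 0, 24, and 32, so every voxel is covered and interior voxels fall inside more than one window.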

Inferers

class monai.inferers.Inferer[source]

A base class for model inference. Extend this class to support operations during inference, e.g. a sliding window method.

abstract __call__(inputs, network)[source]

Run inference on inputs with the network model.

Parameters
  • inputs (Tensor) – input of the model inference.

  • network (Callable[[Tensor], Tensor]) – model for inference.

Raises

NotImplementedError – When the subclass does not override this method.
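The contract above can be sketched as a small abstract base class. This is a simplified stand-in, not MONAI's source; the class name is illustrative:

```python
from abc import ABC, abstractmethod
from typing import Callable

class InfererSketch(ABC):
    """Sketch of the Inferer contract: subclasses decide how the network
    is applied to the inputs (e.g. plain forward vs. sliding window)."""

    @abstractmethod
    def __call__(self, inputs, network: Callable):
        """Run inference on inputs with the network; subclasses must override."""
        raise NotImplementedError(
            f"Subclass {self.__class__.__name__} must implement this method."
        )
```

Because `__call__` is abstract, instantiating the base class directly raises a TypeError; only concrete subclasses that override it can be used.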

SimpleInferer

class monai.inferers.SimpleInferer[source]

SimpleInferer is the basic inference method that runs the model's forward() directly.

__call__(inputs, network)[source]

Unified callable function API of Inferers.

Parameters
  • inputs (Tensor) – model input data for inference.

  • network (Callable[[Tensor], Tensor]) – target model to run inference with. Supports callables such as lambda x: my_torch_model(x, additional_config).
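SimpleInferer's behavior reduces to a single forward pass with no windowing or blending, which can be sketched as follows (an illustrative stand-in, not MONAI's source):

```python
from typing import Callable

class SimpleInfererSketch:
    """Sketch of SimpleInferer's behavior: apply the network to the
    inputs in one forward pass, with no windowing or blending."""

    def __call__(self, inputs, network: Callable):
        return network(inputs)
```

Because the network argument is just a callable, wrapping a model with extra configuration in a lambda, as the parameter description suggests, works unchanged.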

SlidingWindowInferer

class monai.inferers.SlidingWindowInferer(roi_size, sw_batch_size=1, overlap=0.25, mode=<BlendMode.CONSTANT: 'constant'>, sigma_scale=0.125, padding_mode=<PytorchPadMode.CONSTANT: 'constant'>, cval=0.0)[source]

Sliding window method for model inference, with sw_batch_size windows for every model.forward().

Parameters
  • roi_size (Union[Sequence[int], int]) – the window size for SlidingWindow evaluation. When a component is None or non-positive, the corresponding dimension of the input image is used instead. For example, roi_size=(32, -1) will be adapted to (32, 64) if the second spatial dimension of the input is 64.

  • sw_batch_size (int) – the batch size to run window slices.

  • overlap (float) – Amount of overlap between scans.

  • mode (Union[BlendMode, str]) –

    {"constant", "gaussian"} How to blend output of overlapping windows. Defaults to "constant".

    • "constant": gives equal weight to all predictions.

    • "gaussian": gives less weight to predictions on edges of windows.

  • sigma_scale (Union[Sequence[float], float]) – the standard deviation coefficient of the Gaussian window when mode is "gaussian". Default: 0.125. Actual window sigma is sigma_scale * dim_size. When sigma_scale is a sequence of floats, the values denote sigma_scale at the corresponding spatial dimensions.

  • padding_mode (Union[PytorchPadMode, str]) – {"constant", "reflect", "replicate", "circular"} Padding mode when roi_size is larger than inputs. Defaults to "constant". See also: https://pytorch.org/docs/stable/nn.functional.html#pad

  • cval (float) – fill value for "constant" padding mode. Default: 0.

Note

sw_batch_size denotes the max number of windows per network inference iteration, not the batch size of inputs.

__call__(inputs, network)[source]
Parameters
  • inputs (Tensor) – model input data for inference.

  • network (Callable[[Tensor], Tensor]) – target model to run inference with. Supports callables such as lambda x: my_torch_model(x, additional_config).

Return type

Tensor
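The end-to-end effect of constant-mode blending can be sketched in one dimension with plain Python: each window's prediction is accumulated with equal weight, and every position is then divided by the number of windows that covered it. This is a simplified sketch assuming the image is at least one window long (padding omitted); the function name is illustrative and this is not MONAI's implementation:

```python
def blend_constant_1d(image, roi_size, overlap, predictor):
    """1-D sketch of constant-mode window blending: accumulate each
    window's prediction with weight 1, then normalize every position
    by how many windows covered it."""
    n = len(image)
    step = max(1, int(roi_size * (1 - overlap)))
    starts = list(range(0, n - roi_size + 1, step))
    if starts[-1] != n - roi_size:
        starts.append(n - roi_size)  # final window flush with the edge
    accum = [0.0] * n
    counts = [0] * n
    for s in starts:
        pred = predictor(image[s:s + roi_size])  # window-sized prediction
        for i, v in enumerate(pred):
            accum[s + i] += v
            counts[s + i] += 1
    return [a / c for a, c in zip(accum, counts)]
```

A useful sanity check on this scheme: with an identity predictor, the blended output reproduces the input exactly, because overlapping contributions at each position agree and the normalization cancels the overlap count.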