# Inference methods¶

## Sliding Window Inference¶

monai.inferers.sliding_window_inference(inputs, roi_size, sw_batch_size, predictor, overlap=0.25, mode=<BlendMode.CONSTANT: 'constant'>, sigma_scale=0.125, padding_mode=<PytorchPadMode.CONSTANT: 'constant'>, cval=0.0, sw_device=None, device=None, *args, **kwargs)[source]

Sliding window inference on inputs with predictor.

When roi_size is larger than the inputs’ spatial size, the input image is padded during inference. To maintain the same spatial sizes, the output image will be cropped to the original input size.

Parameters
• inputs (Tensor) – input image to be processed (assuming NCHW[D])

• roi_size (Union[Sequence[int], int]) – the spatial window size for inferences. When a component is None or non-positive, the corresponding dimension of the inputs’ spatial size is used instead. For example, roi_size=(32, -1) will be adapted to (32, 64) if the second spatial dimension size of img is 64.

• sw_batch_size (int) – the batch size to run window slices.

• predictor (Callable[…, Tensor]) – given input tensor patch_data in shape NCHW[D], predictor(patch_data) should return a prediction with the same spatial shape and batch_size, i.e. NMHW[D]; where HW[D] represents the patch spatial size, M is the number of output channels, N is sw_batch_size.

• overlap (float) – Amount of overlap between scans.

• mode (Union[BlendMode, str]) –

{"constant", "gaussian"} How to blend output of overlapping windows. Defaults to "constant".

• "constant": gives equal weight to all predictions.

• "gaussian": gives less weight to predictions on edges of windows.

• sigma_scale (Union[Sequence[float], float]) – the standard deviation coefficient of the Gaussian window when mode is "gaussian". Default: 0.125. Actual window sigma is sigma_scale * dim_size. When sigma_scale is a sequence of floats, the values denote sigma_scale at the corresponding spatial dimensions.

• padding_mode (Union[PytorchPadMode, str]) – {"constant", "reflect", "replicate", "circular"} Padding mode for inputs, when roi_size is larger than inputs. Defaults to "constant". See also: https://pytorch.org/docs/stable/nn.functional.html#pad

• cval (float) – fill value for "constant" padding mode. Default: 0.

• sw_device (Union[device, str, None]) – device for the window data. By default the device (and accordingly the memory) of the inputs is used. Normally sw_device should be consistent with the device where predictor is defined.

• device (Union[device, str, None]) – device for the stitched output prediction. By default the device (and accordingly the memory) of the inputs is used. If set to device=torch.device("cpu"), for example, GPU memory consumption is lower and independent of the inputs and roi_size. The output is on this device.

• args (Any) – optional args to be passed to predictor.

• kwargs (Any) – optional keyword args to be passed to predictor.

Note

• inputs must be channel-first and have a batch dimension; N-D sliding window is supported.

Return type

Tensor

## Inferers¶

class monai.inferers.Inferer[source]

A base class for model inference. Extend this class to support operations during inference, e.g. a sliding window method.

Example code:

```python
device = torch.device("cuda:0")
model = UNet(...).to(device)
inferer = SlidingWindowInferer(...)

model.eval()
pred = inferer(inputs=data, network=model)
...
```

abstract __call__(inputs, network, *args, **kwargs)[source]

Run inference on inputs with the network model.

Parameters
• inputs (Tensor) – input of the model inference.

• network (Callable[…, Tensor]) – model for inference.

• args (Any) – optional args to be passed to network.

• kwargs (Any) – optional keyword args to be passed to network.

Raises

NotImplementedError – When the subclass does not override this method.

### SimpleInferer¶

class monai.inferers.SimpleInferer[source]

SimpleInferer is the plain inference method that runs the model forward() directly. A usage example can be found in the monai.inferers.Inferer base class.

__call__(inputs, network, *args, **kwargs)[source]

Unified callable function API of Inferers.

Parameters
• inputs (Tensor) – model input data for inference.

• network (Callable[…, Tensor]) – target model to execute inference. Supports callables such as lambda x: my_torch_model(x, additional_config).

• args (Any) – optional args to be passed to network.

• kwargs (Any) – optional keyword args to be passed to network.

### SlidingWindowInferer¶

class monai.inferers.SlidingWindowInferer(roi_size, sw_batch_size=1, overlap=0.25, mode=<BlendMode.CONSTANT: 'constant'>, sigma_scale=0.125, padding_mode=<PytorchPadMode.CONSTANT: 'constant'>, cval=0.0, sw_device=None, device=None)[source]

Sliding window method for model inference, processing sw_batch_size windows per model.forward() call. A usage example can be found in the monai.inferers.Inferer base class.

Parameters
• roi_size (Union[Sequence[int], int]) – the window size to execute SlidingWindow evaluation. When a component is None or non-positive, the corresponding dimension of the inputs size is used instead. For example, roi_size=(32, -1) will be adapted to (32, 64) if the second spatial dimension size of img is 64.

• sw_batch_size (int) – the batch size to run window slices.

• overlap (float) – Amount of overlap between scans.

• mode (Union[BlendMode, str]) –

{"constant", "gaussian"} How to blend output of overlapping windows. Defaults to "constant".

• "constant": gives equal weight to all predictions.

• "gaussian": gives less weight to predictions on edges of windows.

• sigma_scale (Union[Sequence[float], float]) – the standard deviation coefficient of the Gaussian window when mode is "gaussian". Default: 0.125. Actual window sigma is sigma_scale * dim_size. When sigma_scale is a sequence of floats, the values denote sigma_scale at the corresponding spatial dimensions.

• padding_mode (Union[PytorchPadMode, str]) – {"constant", "reflect", "replicate", "circular"} Padding mode when roi_size is larger than inputs. Defaults to "constant". See also: https://pytorch.org/docs/stable/nn.functional.html#pad

• cval (float) – fill value for "constant" padding mode. Default: 0.

• sw_device (Union[device, str, None]) – device for the window data. By default the device (and accordingly the memory) of the inputs is used. Normally sw_device should be consistent with the device where network is defined.

• device (Union[device, str, None]) – device for the stitched output prediction. By default the device (and accordingly the memory) of the inputs is used. If set to device=torch.device("cpu"), for example, GPU memory consumption is lower and independent of the inputs and roi_size. The output is on this device.

Note

sw_batch_size denotes the max number of windows per network inference iteration, not the batch size of inputs.

__call__(inputs, network, *args, **kwargs)[source]

Parameters
• inputs (Tensor) – model input data for inference.

• network (Callable[…, Tensor]) – target model to execute inference. Supports callables such as lambda x: my_torch_model(x, additional_config).

• args (Any) – optional args to be passed to network.

• kwargs (Any) – optional keyword args to be passed to network.

Return type

Tensor