monailabel.scribbles.infer module¶
- class monailabel.scribbles.infer.GMMBasedGraphCut(dimension=3, description='A post processing step with GMM-based GraphCut for Generic segmentation', intensity_range=(-300, 200, 0.0, 1.0, True), pix_dim=(2.5, 2.5, 5.0), lamda=1.0, sigma=0.1, num_mixtures=20, labels=None, config=None)[source]¶
Bases: monailabel.scribbles.infer.ScribblesLikelihoodInferTask
Defines a Gaussian Mixture Model (GMM) based task for generic segmentation, drawing on the following papers:
Rother, Carsten, Vladimir Kolmogorov, and Andrew Blake. “‘GrabCut’: interactive foreground extraction using iterated graph cuts.” ACM transactions on graphics (TOG) 23.3 (2004): 309-314.
Wang, Guotai, et al. “Interactive medical image segmentation using deep learning with image-specific fine tuning.” IEEE transactions on medical imaging 37.7 (2018): 1562-1573. (preprint: https://arxiv.org/pdf/1710.04043.pdf)
This task takes as input 1) the original image volume and 2) user scribbles indicating foreground and background regions. A likelihood volume is generated using the GMM method, and the user scribbles are incorporated using Equation 7 on page 4 of Wang et al. (an illustrative likelihood sketch follows the parameter list below).
numpymaxflow’s GraphCut layer is used to optimise Equation 5 of Wang et al., where the unary term comes from Equation 7 and the pairwise term is computed from the original input volume.
- Parameters
path – Model file path. Supports multiple paths for versioning (the last item is picked as the latest)
network – Model network (e.g. monai.networks.xyz). None if you use TorchScript (torch.jit)
type – Type of infer task (e.g. segmentation, deepgrow)
labels – Labels associated with this infer task
dimension – Input dimension
description – Description
model_state_dict – Key for loading the model state from the checkpoint
input_key – Input key for running inference
output_label_key – Output key for storing the result/label of inference
output_json_key – Output key for storing the JSON result of inference
config – Key/value pairs to be part of the user config
load_strict – Load the model in strict mode
roi_size – ROI size for sliding window inference
preload – Preload the model/network on all available GPU devices
train_mode – Run in train mode instead of eval mode (e.g. when the network uses dropout)
skip_writer – Skip the writer and return the data dictionary
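For intuition, here is a minimal, illustrative sketch of the GMM likelihood step using scikit-learn and SciPy rather than the task's internal MONAI Label transforms; the gmm_likelihood helper and the scribble label values (2 for background, 3 for foreground) are assumptions made only for this example.

import numpy as np
from scipy.special import expit
from sklearn.mixture import GaussianMixture

def gmm_likelihood(image, scribbles, fg_label=3, bg_label=2, num_mixtures=20):
    # image, scribbles: 3D numpy arrays of identical shape
    intensities = image.reshape(-1, 1).astype(np.float32)

    # fit one GMM per class on the intensities of the scribbled voxels
    fg_gmm = GaussianMixture(n_components=num_mixtures).fit(image[scribbles == fg_label].reshape(-1, 1))
    bg_gmm = GaussianMixture(n_components=num_mixtures).fit(image[scribbles == bg_label].reshape(-1, 1))

    # per-voxel log-likelihood under each class model
    fg_ll = fg_gmm.score_samples(intensities).reshape(image.shape)
    bg_ll = bg_gmm.score_samples(intensities).reshape(image.shape)

    # sigmoid of the log-likelihood ratio gives a foreground probability volume
    return expit(fg_ll - bg_ll)

This foreground probability volume plays the role of the unary term that the GraphCut stage then refines against the image-derived pairwise term.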
- class monailabel.scribbles.infer.HistogramBasedGraphCut(dimension=3, description='A post processing step with histogram-based GraphCut for Generic segmentation', intensity_range=(-300, 200, 0.0, 1.0, True), pix_dim=(2.5, 2.5, 5.0), lamda=1.0, sigma=0.1, num_bins=64, labels=None, config=None)[source]¶
Bases: monailabel.scribbles.infer.ScribblesLikelihoodInferTask
Defines a histogram-based GraphCut task for generic segmentation, drawing on the following paper:
Wang, Guotai, et al. “Interactive medical image segmentation using deep learning with image-specific fine tuning.” IEEE transactions on medical imaging 37.7 (2018): 1562-1573. (preprint: https://arxiv.org/pdf/1710.04043.pdf)
This task takes as input 1) the original image volume and 2) user scribbles indicating foreground and background regions. A likelihood volume is generated using the histogram method, and the user scribbles are incorporated using Equation 7 on page 4 of the paper (an illustrative likelihood sketch follows the parameter list below).
numpymaxflow’s GraphCut layer is used to optimise Equation 5 from the paper, where the unary term comes from Equation 7 and the pairwise term is computed from the original input volume.
- Parameters
path – Model file path. Supports multiple paths for versioning (the last item is picked as the latest)
network – Model network (e.g. monai.networks.xyz). None if you use TorchScript (torch.jit)
type – Type of infer task (e.g. segmentation, deepgrow)
labels – Labels associated with this infer task
dimension – Input dimension
description – Description
model_state_dict – Key for loading the model state from the checkpoint
input_key – Input key for running inference
output_label_key – Output key for storing the result/label of inference
output_json_key – Output key for storing the JSON result of inference
config – Key/value pairs to be part of the user config
load_strict – Load the model in strict mode
roi_size – ROI size for sliding window inference
preload – Preload the model/network on all available GPU devices
train_mode – Run in train mode instead of eval mode (e.g. when the network uses dropout)
skip_writer – Skip the writer and return the data dictionary
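Analogously, a minimal sketch of the histogram likelihood step using plain NumPy; the histogram_likelihood helper and the scribble label values are again illustrative assumptions, not the task's internal implementation.

import numpy as np

def histogram_likelihood(image, scribbles, fg_label=3, bg_label=2, num_bins=64):
    # build one normalised intensity histogram per class from the scribbled voxels
    bins = np.linspace(image.min(), image.max(), num_bins + 1)
    fg_hist, _ = np.histogram(image[scribbles == fg_label], bins=bins, density=True)
    bg_hist, _ = np.histogram(image[scribbles == bg_label], bins=bins, density=True)

    # map every voxel to its histogram bin and normalise to a foreground probability
    idx = np.clip(np.digitize(image, bins) - 1, 0, num_bins - 1)
    return fg_hist[idx] / (fg_hist[idx] + bg_hist[idx] + 1e-8)

As above, the resulting probability volume serves as the unary term for the subsequent GraphCut optimisation.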
- class monailabel.scribbles.infer.ScribblesLikelihoodInferTask(dimension=3, description='A post processing step with likelihood + GraphCut for Generic segmentation', intensity_range=(-300, 200, 0.0, 1.0, True), pix_dim=(2.5, 2.5, 5.0), lamda=1.0, sigma=0.1, labels=None, config=None)[source]¶
Bases: monailabel.tasks.infer.basic_infer.BasicInferTask
Defines a generic scribbles-likelihood-based segmentation infer task (a usage sketch appears at the end of this entry)
- Parameters
path – Model file path. Supports multiple paths for versioning (the last item is picked as the latest)
network – Model network (e.g. monai.networks.xyz). None if you use TorchScript (torch.jit)
type – Type of infer task (e.g. segmentation, deepgrow)
labels – Labels associated with this infer task
dimension – Input dimension
description – Description
model_state_dict – Key for loading the model state from the checkpoint
input_key – Input key for running inference
output_label_key – Output key for storing the result/label of inference
output_json_key – Output key for storing the JSON result of inference
config – Key/value pairs to be part of the user config
load_strict – Load the model in strict mode
roi_size – ROI size for sliding window inference
preload – Preload the model/network on all available GPU devices
train_mode – Run in train mode instead of eval mode (e.g. when the network uses dropout)
skip_writer – Skip the writer and return the data dictionary
- post_transforms(data)[source]¶
Provide a list of post-transforms
- Parameters
data –
current data dictionary/request, which can be helpful for defining transforms on a per-request basis
For example:
return [
    monai.transforms.EnsureChannelFirstd(keys='pred', channel_dim='no_channel'),
    monai.transforms.Activationsd(keys='pred', softmax=True),
    monai.transforms.AsDiscreted(keys='pred', argmax=True),
    monai.transforms.SqueezeDimd(keys='pred', dim=0),
    monai.transforms.ToNumpyd(keys='pred'),
    monailabel.interface.utils.Restored(keys='pred', ref_image='image'),
    monailabel.interface.utils.ExtremePointsd(keys='pred', result='result', points='points'),
    monailabel.interface.utils.BoundingBoxd(keys='pred', result='result', bbox='bbox'),
]
- pre_transforms(data)[source]¶
Provide a list of pre-transforms
- Parameters
data –
current data dictionary/request, which can be helpful for defining transforms on a per-request basis
For example:
return [
    monai.transforms.LoadImaged(keys='image'),
    monai.transforms.EnsureChannelFirstd(keys='image', channel_dim='no_channel'),
    monai.transforms.Spacingd(keys='image', pixdim=[1.0, 1.0, 1.0]),
    monai.transforms.ScaleIntensityRanged(keys='image', a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
]
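Finally, a hedged usage sketch of how these scribbles tasks might be registered in a MONAI Label app's infer-task dictionary; the dictionary keys are arbitrary display names, the keyword values simply echo the defaults documented above, and intensity_range is assumed to follow the ScaleIntensityRanged-style convention (a_min, a_max, b_min, b_max, clip).

from monailabel.scribbles.infer import GMMBasedGraphCut, HistogramBasedGraphCut

# register both scribbles-based post-processing tasks; keys are arbitrary display names
infers = {
    "GMM+GraphCut": GMMBasedGraphCut(
        intensity_range=(-300, 200, 0.0, 1.0, True),  # assumed (a_min, a_max, b_min, b_max, clip)
        pix_dim=(2.5, 2.5, 5.0),
        lamda=1.0,
        sigma=0.1,
        num_mixtures=20,
    ),
    "Histogram+GraphCut": HistogramBasedGraphCut(
        intensity_range=(-300, 200, 0.0, 1.0, True),
        pix_dim=(2.5, 2.5, 5.0),
        lamda=1.0,
        sigma=0.1,
        num_bins=64,
    ),
}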