monailabel.tasks.infer.bundle module

class monailabel.tasks.infer.bundle.BundleConstants[source]

Bases: object

configs()[source]
Return type

Sequence[str]

key_bundle_root()[source]
Return type

str

key_detector()[source]
Return type

Sequence[str]

key_detector_ops()[source]
Return type

Sequence[str]

key_device()[source]
Return type

str

key_displayable_configs()[source]
Return type

Sequence[str]

key_inferer()[source]
Return type

Sequence[str]

key_network_def()[source]
Return type

str

key_postprocessing()[source]
Return type

Sequence[str]

key_preprocessing()[source]
Return type

Sequence[str]

metadata_json()[source]
Return type

str

model_pytorch()[source]
Return type

str

model_torchscript()[source]
Return type

str
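
The methods above supply the config keys and file names used when parsing a MONAI Bundle. A minimal sketch of customizing them by subclassing (the alternative key name 'my_preprocessing' is an assumption for illustration; the method name and return type follow the listing above):

from typing import Sequence

from monailabel.tasks.infer.bundle import BundleConstants


class MyBundleConstants(BundleConstants):
    def key_preprocessing(self) -> Sequence[str]:
        # Hypothetical: also look up a non-default section name in the bundle config.
        return ["my_preprocessing", "preprocessing"]

An instance of such a subclass can presumably be passed via the const argument of BundleInferTask below to change how the bundle config is interpreted.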

class monailabel.tasks.infer.bundle.BundleInferTask(path, conf, const=None, type='', pre_filter=None, post_filter=[monai.transforms.io.dictionary.SaveImaged], extend_load_image=True, add_post_restore=True, dropout=0.0, load_strict=False, **kwargs)[source]

Bases: monailabel.tasks.infer.basic_infer.BasicInferTask

This provides an inference engine for a MONAI Bundle. A minimal usage sketch follows the parameter list below.

Parameters
  • path (str) – Model file path. Multiple paths may be provided for versioning (the last item is picked as the latest)

  • network – Model network (e.g., monai.networks.xyz). None if you use TorchScript (torch.jit).

  • type (Union[str, InferType]) – Type of Infer (segmentation, deepgrow, etc.)

  • labels – Labels associated with this Infer

  • dimension – Input dimension

  • description – Description

  • model_state_dict – Key for loading the model state from checkpoint

  • input_key – Input key for running inference

  • output_label_key – Output key for storing result/label of inference

  • output_json_key – Output key for storing the JSON result of inference

  • config – Key/value pairs to be part of the user config

  • load_strict – Load model in strict mode

  • roi_size – ROI size for sliding window inference

  • preload – Preload model/network on all available GPU devices

  • train_mode – Run in train mode instead of eval mode (e.g., when the network has dropout layers)

  • skip_writer – Skip the writer and return the data dictionary
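
A minimal usage sketch of constructing this task from a downloaded bundle directory and registering it in an app. The bundle path, conf values, and task name are hypothetical; only the constructor arguments and is_valid() shown on this page are taken from the documentation:

from monailabel.tasks.infer.bundle import BundleInferTask

# Hypothetical bundle directory and app configuration -- adjust to your setup.
bundle_path = "/workspace/apps/my_app/model/spleen_ct_segmentation"
conf = {"preload": "false"}

task = BundleInferTask(path=bundle_path, conf=conf, type="segmentation")

# Register the task only if the bundle metadata/configs could be parsed.
if task.is_valid():
    infers = {"spleen_ct_segmentation": task}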

detector(data=None)[source]
Return type

Optional[Callable]

inferer(data=None)[source]
Return type

Inferer

info()[source]
Return type

Dict[str, Any]

is_valid()[source]
Return type

bool

post_transforms(data=None)[source]

Provide a list of post-transforms

Parameters

data

current data dictionary/request, which can be helpful for defining the transforms on a per-request basis

For Example:

return [
    monai.transforms.EnsureChannelFirstd(keys='pred', channel_dim='no_channel'),
    monai.transforms.Activationsd(keys='pred', softmax=True),
    monai.transforms.AsDiscreted(keys='pred', argmax=True),
    monai.transforms.SqueezeDimd(keys='pred', dim=0),
    monai.transforms.ToNumpyd(keys='pred'),
    monailabel.interface.utils.Restored(keys='pred', ref_image='image'),
    monailabel.interface.utils.ExtremePointsd(keys='pred', result='result', points='points'),
    monailabel.interface.utils.BoundingBoxd(keys='pred', result='result', bbox='bbox'),
]

Return type

Sequence[Callable]

pre_transforms(data=None)[source]

Provide a list of pre-transforms

Parameters

data

current data dictionary/request, which can be helpful for defining the transforms on a per-request basis

For Example:

return [
    monai.transforms.LoadImaged(keys='image'),
    monai.transforms.EnsureChannelFirstd(keys='image', channel_dim='no_channel'),
    monai.transforms.Spacingd(keys='image', pixdim=[1.0, 1.0, 1.0]),
    monai.transforms.ScaleIntensityRanged(keys='image',
        a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
]

Return type

Sequence[Callable]