monailabel.tasks.infer.basic_infer module¶
- class monailabel.tasks.infer.basic_infer.BasicInferTask(path, network, type, labels, dimension, description, model_state_dict='model', input_key='image', output_label_key='pred', output_json_key='result', config=None, load_strict=True, roi_size=None, preload=False, train_mode=False, skip_writer=False)[source]¶
Bases: monailabel.interfaces.tasks.infer_v2.InferTask
Basic Inference Task Helper
- Parameters
  - path (Optional[Union[str, Sequence[str]]]) – Model file path. Supports multiple paths to support versions (the last item is picked as the latest).
  - network (Optional[Any]) – Model network (e.g. monai.networks.xyz). Use None if you use a TorchScript (torch.jit) model.
  - type (Union[str, InferType]) – Type of infer (e.g. segmentation, deepgrow).
  - labels (Optional[Union[str, Sequence[str], Dict[Any, Any]]]) – Labels associated with this infer task.
  - dimension (int) – Input dimension.
  - description (str) – Description.
  - model_state_dict (str) – Key for loading the model state from the checkpoint.
  - input_key (str) – Input key for running inference.
  - output_label_key (str) – Output key for storing the result/label of inference.
  - output_json_key (str) – Output key for storing the JSON result of inference.
  - config (Optional[Dict[str, Any]]) – K,V pairs to be part of the user config.
  - load_strict (bool) – Load the model in strict mode.
  - roi_size – ROI size for sliding window inference.
  - preload – Preload the model/network on all available GPU devices.
  - train_mode – Run in train mode instead of eval (when the network has dropouts).
  - skip_writer – Skip the writer and return the data dictionary.
- __init__(path, network, type, labels, dimension, description, model_state_dict='model', input_key='image', output_label_key='pred', output_json_key='result', config=None, load_strict=True, roi_size=None, preload=False, train_mode=False, skip_writer=False)[source]¶
- Parameters
  - path (Optional[Union[str, Sequence[str]]]) – Model file path. Supports multiple paths to support versions (the last item is picked as the latest).
  - network (Optional[Any]) – Model network (e.g. monai.networks.xyz). Use None if you use a TorchScript (torch.jit) model.
  - type (Union[str, InferType]) – Type of infer (e.g. segmentation, deepgrow).
  - labels (Optional[Union[str, Sequence[str], Dict[Any, Any]]]) – Labels associated with this infer task.
  - dimension (int) – Input dimension.
  - description (str) – Description.
  - model_state_dict (str) – Key for loading the model state from the checkpoint.
  - input_key (str) – Input key for running inference.
  - output_label_key (str) – Output key for storing the result/label of inference.
  - output_json_key (str) – Output key for storing the JSON result of inference.
  - config (Optional[Dict[str, Any]]) – K,V pairs to be part of the user config.
  - load_strict (bool) – Load the model in strict mode.
  - roi_size – ROI size for sliding window inference.
  - preload – Preload the model/network on all available GPU devices.
  - train_mode – Run in train mode instead of eval (when the network has dropouts).
  - skip_writer – Skip the writer and return the data dictionary.
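The usual pattern is to subclass BasicInferTask, forward the constructor arguments above, and implement the two abstract transform hooks documented below. A minimal, hedged sketch (the class name, label map, and transform choices are illustrative assumptions, not part of this API):

    import monai.transforms as mt
    from monailabel.interfaces.tasks.infer_v2 import InferType
    from monailabel.tasks.infer.basic_infer import BasicInferTask

    class MySegmentationTask(BasicInferTask):  # hypothetical subclass
        def __init__(self, path, network=None):
            super().__init__(
                path=path,                      # e.g. ["model_v1.pt", "model_v2.pt"]; the last one is used
                network=network,                # None when path points to a TorchScript model
                type=InferType.SEGMENTATION,
                labels={"spleen": 1},           # illustrative label map
                dimension=3,
                description="Sketch of a 3D segmentation task",
                roi_size=(96, 96, 96),
            )

        def pre_transforms(self, data=None):
            return [
                mt.LoadImaged(keys="image"),
                mt.EnsureChannelFirstd(keys="image", channel_dim="no_channel"),
                mt.ScaleIntensityd(keys="image"),
            ]

        def post_transforms(self, data=None):
            return [
                mt.Activationsd(keys="pred", softmax=True),
                mt.AsDiscreted(keys="pred", argmax=True),
            ]

With these two hooks in place, the base class drives the pipeline described on this page: pre-transforms, the inferer, inverse/post-transforms, and finally the writer.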
- add_cache_transform(t, data, keys=('image', 'image_meta_dict'), hash_key=('image_path', 'model'))[source]¶
- inverse_transforms(data=None)[source]¶
Provide a list of inverse transforms. These are normally a subset of the pre-transforms. This task is performed on the output label (using the references from input_key).
- Parameters
  data – current data dictionary/request, which can help define the transforms on a per-request basis
- Return one of the following:
  - None: return None to disable running any inverse transforms (default behavior).
  - Empty list: return [] to run the inverse method of all applicable pre-transforms (those that provide one).
  - List: return a list of specific pre-transform names/classes whose inverse method should be run.
For example:
    return [monai.transforms.Spacingd]
- Return type
  Optional[Sequence[Callable]]
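For instance, if the pre-transforms include monai.transforms.Spacingd, a subclass could restore the original spacing on the output label with an override like this (a sketch, placed inside a subclass such as the hypothetical MySegmentationTask above):

    import monai.transforms

    # Inside a subclass such as the hypothetical MySegmentationTask:
    def inverse_transforms(self, data=None):
        # Invert only Spacingd on the output label.
        # Returning None (the default) disables inversion entirely;
        # returning [] inverts every pre-transform that supports it.
        return [monai.transforms.Spacingd]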
- abstract post_transforms(data=None)[source]¶
Provide a list of post-transforms.
- Parameters
  data – current data dictionary/request, which can help define the transforms on a per-request basis
For example:
    return [
        monai.transforms.EnsureChannelFirstd(keys='pred', channel_dim='no_channel'),
        monai.transforms.Activationsd(keys='pred', softmax=True),
        monai.transforms.AsDiscreted(keys='pred', argmax=True),
        monai.transforms.SqueezeDimd(keys='pred', dim=0),
        monai.transforms.ToNumpyd(keys='pred'),
        monailabel.interface.utils.Restored(keys='pred', ref_image='image'),
        monailabel.interface.utils.ExtremePointsd(keys='pred', result='result', points='points'),
        monailabel.interface.utils.BoundingBoxd(keys='pred', result='result', bbox='bbox'),
    ]
- Return type
  Sequence[Callable]
- abstract pre_transforms(data=None)[source]¶
Provide a list of pre-transforms.
- Parameters
  data – current data dictionary/request, which can help define the transforms on a per-request basis
For example:
    return [
        monai.transforms.LoadImaged(keys='image'),
        monai.transforms.EnsureChannelFirstd(keys='image', channel_dim='no_channel'),
        monai.transforms.Spacingd(keys='image', pixdim=[1.0, 1.0, 1.0]),
        monai.transforms.ScaleIntensityRanged(keys='image', a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
    ]
- Return type
  Sequence[Callable]
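Once the abstract hooks are implemented, the task is typically invoked as a callable with a request dictionary. A hedged usage sketch, assuming the default input_key='image' and a TorchScript model so that network=None:

    # Hypothetical usage of the MySegmentationTask sketch above.
    task = MySegmentationTask(path="/path/to/model.ts", network=None)

    request = {
        "image": "/path/to/image.nii.gz",  # matches the default input_key
        "device": "cuda",
    }
    # Runs pre-transforms, the inferer, inverse/post-transforms, and the writer;
    # returns the written label file and the result JSON.
    label_file, result_json = task(request)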
- run_detector(data, convert_to_batch=True, device='cuda')[source]¶
Run the detector over pre-processed data. Override this method to customize the default behavior, for example to run chained inferers over the pre-processed data.
- Parameters
  - data (Dict[str, Any]) – pre-processed data
  - convert_to_batch – convert the input to a batched input
  - device – device on which to load the model and run the inferer
- Returns
updated data with the result stored under the output key; it will be used for post-processing
- run_inferer(data, convert_to_batch=True, device='cuda')[source]¶
Run the inferer over pre-processed data. Override this method to customize the default behavior, for example to run chained inferers over the pre-processed data.
- Parameters
  - data (Dict[str, Any]) – pre-processed data
  - convert_to_batch – convert the input to a batched input
  - device – device on which to load the model and run the inferer
- Returns
updated data with the result stored under the output key; it will be used for post-processing
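As a sketch of the kind of override the description invites (hedged: it assumes the constructor arguments above are stored on self, and _load_model is a hypothetical stand-in for however your task obtains the network):

    import torch

    # Inside a subclass; a custom single-pass inference instead of the default.
    def run_inferer(self, data, convert_to_batch=True, device="cuda"):
        inputs = data[self.input_key]           # pre-processed image
        if convert_to_batch:
            inputs = inputs[None]               # add a batch dimension
        inputs = torch.as_tensor(inputs, device=device)

        model = self._load_model(device)        # hypothetical helper, not part of this API
        with torch.no_grad():
            pred = model(inputs)

        # Store the prediction where post-transforms expect it.
        data[self.output_label_key] = pred[0] if convert_to_batch else pred
        return data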
- writer(data, extension=None, dtype=None)[source]¶
You can provide your own writer. By default, this writer saves the prediction/label mask to a file and fetches the result JSON.
- Parameters
  - data (Dict[str, Any]) – typically post-processed data
  - extension – output label extension
  - dtype – output label dtype
- Return type
  Tuple[Any, Any]
- Returns
  tuple of output_file and result_json
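When the default file-based behavior doesn't fit (e.g. the caller wants the mask in memory), this method can be overridden. A hedged sketch, assuming the post-processed prediction sits under output_label_key as a numpy-convertible array and the result JSON under output_json_key:

    import numpy as np

    # Inside a subclass; keep the mask in memory instead of writing a file.
    def writer(self, data, extension=None, dtype=None):
        pred = np.asarray(data[self.output_label_key])
        if dtype is not None:
            pred = pred.astype(dtype)
        result_json = data.get(self.output_json_key, {})
        # Same two-element shape as the default writer's
        # (output_file, result_json) tuple.
        return pred, result_json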