monailabel.tasks.infer.deepgrow_pipeline module

class monailabel.tasks.infer.deepgrow_pipeline.InferDeepgrowPipeline(path, model_3d, network=None, type='deepgrow', dimension=3, description='Combines Deepgrow 2D model with any 3D segmentation/deepgrow model', spatial_size=(256, 256), model_size=(256, 256), batch_size=32, min_point_density=10, max_random_points=10, random_point_density=1000, output_largest_cc=False)[source]

Bases: monailabel.interfaces.tasks.infer.InferTask

Parameters
  • path – Model file path. Multiple paths are supported for versioning (the last item is picked as the latest).

  • network – Model network (e.g. monai.networks.xyz). None if you use TorchScript (torch.jit).

  • type – Type of inference (segmentation, deepgrow, etc.)

  • dimension – Input dimension

  • description – Description

  • model_state_dict – Key for loading the model state from checkpoint

  • input_key – Input key for running inference

  • output_label_key – Output key for storing result/label of inference

  • output_json_key – Output key for storing the JSON result/metadata of inference

  • config – Key-value pairs to be part of the user config
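
As a usage sketch (not taken from this module's documentation), the pipeline is typically registered inside a MONAI Label app alongside the 3D infer task it wraps. The app method name, the model path, and the infer_deepgrow_3d task below are illustrative assumptions:

from monailabel.tasks.infer.deepgrow_pipeline import InferDeepgrowPipeline

def init_infers(self):
    # `infer_deepgrow_3d` stands for an existing 3D segmentation/deepgrow InferTask
    # created elsewhere in the app (hypothetical helper shown here)
    infer_deepgrow_3d = self._create_deepgrow_3d_task()
    return {
        'deepgrow_3d': infer_deepgrow_3d,
        'deepgrow_pipeline': InferDeepgrowPipeline(
            path='/workspace/models/deepgrow_2d.pt',  # Deepgrow 2D weights (illustrative path)
            model_3d=infer_deepgrow_3d,               # any 3D segmentation/deepgrow model task
        ),
    }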

get_random_points(label)[source]
get_slices_points(label, initial_foreground)[source]
inferer()[source]

Provide the inferer to run over pre-processed data.

For Example:

return monai.inferers.SlidingWindowInferer(roi_size=[160, 160, 160])
post_transforms()[source]

Provide the list of post-transforms.

For Example:

return [
    monai.transforms.AddChanneld(keys='pred'),
    monai.transforms.Activationsd(keys='pred', softmax=True),
    monai.transforms.AsDiscreted(keys='pred', argmax=True),
    monai.transforms.SqueezeDimd(keys='pred', dim=0),
    monai.transforms.ToNumpyd(keys='pred'),
    monailabel.interface.utils.Restored(keys='pred', ref_image='image'),
    monailabel.interface.utils.ExtremePointsd(keys='pred', result='result', points='points'),
    monailabel.interface.utils.BoundingBoxd(keys='pred', result='result', bbox='bbox'),
]
pre_transforms()[source]

Provide the list of pre-transforms.

For Example:

return [
    monai.transforms.LoadImaged(keys='image'),
    monai.transforms.AddChanneld(keys='image'),
    monai.transforms.Spacingd(keys='image', pixdim=[1.0, 1.0, 1.0]),
    monai.transforms.ScaleIntensityRanged(keys='image',
        a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
]
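
Taken together, pre_transforms(), inferer() and post_transforms() are the hooks an infer task wires up. Below is a minimal, hedged sketch of a custom InferTask subclass combining them; the class name, keys and parameter values are illustrative, and the constructor wiring to the base class is omitted:

import monai.inferers
import monai.transforms
from monailabel.interfaces.tasks.infer import InferTask

class MySegmentationTask(InferTask):
    # __init__ (passing path, network, type, dimension, description, ... to InferTask) omitted for brevity

    def pre_transforms(self):
        return [
            monai.transforms.LoadImaged(keys='image'),
            monai.transforms.AddChanneld(keys='image'),
            monai.transforms.ScaleIntensityd(keys='image'),
        ]

    def inferer(self):
        return monai.inferers.SlidingWindowInferer(roi_size=[160, 160, 160])

    def post_transforms(self):
        return [
            monai.transforms.Activationsd(keys='pred', softmax=True),
            monai.transforms.AsDiscreted(keys='pred', argmax=True),
            monai.transforms.ToNumpyd(keys='pred'),
        ]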
run_batch(run_inferer_method, batched_data, batched_slices, pred)[source]
run_inferer(data, convert_to_batch=True, device='cuda')[source]

Run the inferer over pre-processed data. Override this method to customize the default behavior, for example when you want to run chained inferers over pre-processed data.

Parameters
  • data – pre-processed data

  • convert_to_batch – convert input to batched input

  • device – device on which to load the model and run the inferer

Returns

updated data with the inference result stored under the output key, which will be used for post-processing
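
When the default behavior is not enough, run_inferer can be overridden in a subclass to chain additional logic. A hedged sketch follows; the subclass name and the refinement step are assumptions, not part of the library:

from monailabel.tasks.infer.deepgrow_pipeline import InferDeepgrowPipeline

class MyChainedDeepgrowPipeline(InferDeepgrowPipeline):
    def run_inferer(self, data, convert_to_batch=True, device='cuda'):
        # run the default Deepgrow 2D + 3D pipeline first; the prediction is stored
        # under the task's output label key
        data = super().run_inferer(data, convert_to_batch=convert_to_batch, device=device)
        # hypothetical chained step: refine data[self.output_label_key] here
        # (e.g. pass it to a second model) before post-processing runs
        return data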