API Reference

MONAILabel APP

class monailabel.interfaces.app.MONAILabelApp(app_dir, studies, conf, name='', description='', version='2.0', labels=None)[source]

Default Pre-trained Path for downloading models

Base Class for Any MONAI Label App

Parameters
  • app_dir (str) – path for your App directory

  • studies (str) – path for studies/datalist

  • conf (Dict[str, str]) – dictionary of key/value pairs provided by user while running the app

__init__(app_dir, studies, conf, name='', description='', version='2.0', labels=None)[source]

Base Class for Any MONAI Label App

Parameters
  • app_dir (str) – path for your App directory

  • studies (str) – path for studies/datalist

  • conf (Dict[str, str]) – dictionary of key/value pairs provided by user while running the app

batch_infer(request, datastore=None)[source]

Run batch inference for an existing pre-trained model.

Parameters
  • request – JSON object which contains model, params and device

  • datastore

    Datastore object. If None then use default app level datastore to fetch the images

    For example:

    {
        "device": "cuda",
        "model": "segmentation_spleen",
        "images": "unlabeled",
        "label_tag": "original"
    }
    

Raises

MONAILabelException – When model is not found

Returns

JSON containing label and params
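The fan-out behavior can be sketched in plain Python. This is an illustrative sketch only, not the MONAI Label internals: `run_single_infer` and `batch_infer_sketch` are hypothetical helper names showing how one batch request is applied per unlabeled image.

```python
def run_single_infer(image_id, request):
    # Hypothetical stand-in for the per-image inference call;
    # returns a fake label path and params.
    return {"label": f"{image_id}_seg.nii.gz", "params": {"model": request["model"]}}

def batch_infer_sketch(request, unlabeled_images):
    # Fan the single batch request out over every unlabeled image id.
    return {image_id: run_single_infer(image_id, request) for image_id in unlabeled_images}

request = {
    "device": "cuda",
    "model": "segmentation_spleen",
    "images": "unlabeled",
    "label_tag": "original",
}
results = batch_infer_sketch(request, ["img1", "img2"])
```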

static deepgrow_infer_tasks(model_dir, pipeline=True)[source]

Dictionary of Default Infer Tasks for Deepgrow 2D/3D

infer(request, datastore=None)[source]

Run Inference for an existing pre-trained model.

Parameters
  • request – JSON object which contains model, image, params and device

  • datastore

    Datastore object. If None then use default app level datastore to save labels if applicable

    For example:

    {
        "device": "cuda",
        "model": "segmentation_spleen",
        "image": "file://xyz",
        "save_label": "true/false",
        "label_tag": "original"
    }
    

Raises

MONAILabelException – When model is not found

Returns

JSON containing label and params
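A request like the one above can be validated and defaulted before dispatch. The following is a hedged sketch: `normalize_infer_request` and its default values are hypothetical, not part of the MONAI Label API, and only illustrate that `model` and `image` are the essential keys while the rest have sensible fallbacks.

```python
DEFAULTS = {"device": "cuda", "save_label": "false", "label_tag": "original"}

def normalize_infer_request(request):
    # 'model' and 'image' are mandatory; everything else falls back to defaults.
    if "model" not in request or "image" not in request:
        raise ValueError("infer request needs both 'model' and 'image'")
    return {**DEFAULTS, **request}

req = normalize_infer_request({"model": "segmentation_spleen", "image": "file://xyz"})
```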

info()[source]

Provide basic information about the APP. This information is passed to the client.

next_sample(request)[source]

Run Active Learning selection. User APP has to implement this method to provide next sample for labelling.

Parameters

request

JSON object which contains active learning configs that are part of the APP info

For example:

{
    "strategy": "random"
}

Returns

JSON containing next image info that is selected for labeling
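A "random" strategy can be sketched in a few lines. This is an illustrative stand-in, not the shipped `Strategy` implementation; the `seed` parameter is added here only to make the sketch deterministic.

```python
import random

def next_sample_random(unlabeled_images, seed=None):
    # Pick one unlabeled image uniformly at random,
    # mirroring what a "random" strategy would return.
    rng = random.Random(seed)
    return {"id": rng.choice(unlabeled_images)}

sample = next_sample_random(["img1", "img2", "img3"], seed=0)
```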

on_save_label(image_id, label_id)[source]

Callback method when label is saved into datastore by a remote client

scoring(request, datastore=None)[source]

Run scoring task over labels.

Parameters
  • request – JSON object which contains model, params and device

  • datastore

    Datastore object. If None then use default app level datastore to fetch the images

    For example:

    {
        "device": "cuda",
        "method": "dice",
        "y": "final",
        "y_pred": "original"
    }
    

Raises

MONAILabelException – When method is not found

Returns

JSON containing result of scoring method
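For the `"dice"` method above, the score compares the final label (`y`) against the original prediction (`y_pred`). A minimal sketch of the Dice coefficient on flat binary masks, illustrative only and not the monailabel implementation:

```python
def dice_score(y, y_pred):
    # Dice = 2*|A ∩ B| / (|A| + |B|) on flat binary masks.
    intersection = sum(a * b for a, b in zip(y, y_pred))
    total = sum(y) + sum(y_pred)
    return 2.0 * intersection / total if total else 1.0

score = dice_score([1, 1, 0, 0], [1, 0, 0, 0])
```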

train(request)[source]

Run Training. User APP has to implement this method to run training

Parameters

request

JSON object which contains train configs that are part of the APP info

For example:

{
    "mytrain": {
        "device": "cuda",
        "max_epochs": 1,
        "amp": false,
        "lr": 0.0001
    }
}

Returns

JSON containing train stats
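The per-task block in the request overrides the defaults the APP advertises. A hedged sketch, assuming hypothetical names (`app_train_defaults`, `resolve_train_config`, task name `mytrain`):

```python
app_train_defaults = {"device": "cuda", "max_epochs": 50, "amp": False, "lr": 0.0001}

def resolve_train_config(request, task="mytrain"):
    # Values supplied in the request win; unspecified keys keep their defaults.
    return {**app_train_defaults, **request.get(task, {})}

config = resolve_train_config({"mytrain": {"max_epochs": 1, "amp": False}})
```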

class monailabel.interfaces.datastore.Datastore[source]
abstract add_image(image_id, image_filename, image_info)[source]

Save an image for the given image id and return the newly saved image’s id

Parameters
  • image_id (str) – the image id for the image; if None, the base filename will be used

  • image_filename (str) – the path to the image file

  • image_info (Dict[str, Any]) – additional info for the image

Return type

str

Returns

the image id for the saved image filename

abstract datalist()[source]

Return a list of image and label pairs corresponding to the ‘image’ and ‘label’ keys respectively

Return type

List[Dict[str, str]]

Returns

the {‘image’: image, ‘label’: label} pairs for training
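An illustrative sketch of the expected shape (paths here are hypothetical; real entries hold datastore paths or URIs):

```python
# Each datalist entry pairs an image with its final label.
datalist = [
    {"image": "/data/spleen_10.nii.gz", "label": "/labels/spleen_10.nii.gz"},
    {"image": "/data/spleen_11.nii.gz", "label": "/labels/spleen_11.nii.gz"},
]

# Every entry must carry both keys before it can feed a training loop.
assert all("image" in d and "label" in d for d in datalist)
images = [d["image"] for d in datalist]
```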

abstract description()[source]

Return the user-set description of the dataset

Return type

str

Returns

the user-set description of the dataset

abstract get_image(image_id)[source]

Retrieve image object based on image id

Parameters

image_id (str) – the desired image’s id

Return type

Any

Returns

return the “image”

abstract get_image_info(image_id)[source]

Get the image information for the given image id

Parameters

image_id (str) – the desired image id

Return type

Dict[str, Any]

Returns

image info as a dictionary Dict[str, Any]

abstract get_image_uri(image_id)[source]

Retrieve image uri based on image id

Parameters

image_id (str) – the desired image’s id

Return type

str

Returns

return the image uri

abstract get_label(label_id, label_tag)[source]

Retrieve label object based on label id

Parameters
  • label_id (str) – the desired label’s id

  • label_tag (str) – the matching label’s tag

Return type

Any

Returns

return the “label”

abstract get_label_by_image_id(image_id, tag)[source]

Retrieve label id for the given image id and tag

Parameters
  • image_id (str) – the desired image’s id

  • tag (str) – matching tag name

Return type

str

Returns

label id

abstract get_label_info(label_id, label_tag)[source]

Get the label information for the given label id

Parameters
  • label_id (str) – the desired label id

  • label_tag (str) – the matching label tag

Return type

Dict[str, Any]

Returns

label info as a dictionary Dict[str, Any]

abstract get_label_uri(label_id, label_tag)[source]

Retrieve label uri based on label id

Parameters
  • label_id (str) – the desired label’s id

  • label_tag (str) – the matching label’s tag

Return type

str

Returns

return the label uri

abstract get_labeled_images()[source]

Get all images that have a corresponding final label

Return type

List[str]

Returns

list of image ids List[str]

abstract get_labels_by_image_id(image_id)[source]

Retrieve all label ids for the given image id

Parameters

image_id (str) – the desired image’s id

Return type

Dict[str, str]

Returns

label ids mapped to the appropriate LabelTag as Dict[LabelTag, str]

abstract get_unlabeled_images()[source]

Get all images that have no corresponding final label

Return type

List[str]

Returns

list of image ids List[str]

abstract json()[source]

Return json representation of datastore

abstract list_images()[source]

Return list of image ids available in the datastore

Return type

List[str]

Returns

list of image ids List[str]

abstract name()[source]

Return the human-readable name of the datastore

Return type

str

Returns

the name of the dataset

abstract refresh()[source]

Refresh the datastore

Return type

None

abstract remove_image(image_id)[source]

Remove image from the datastore. This will also remove all associated labels.

Parameters

image_id (str) – the image id for the image to be removed from datastore

Return type

None

abstract remove_label(label_id, label_tag)[source]

Remove label from the datastore

Parameters
  • label_id (str) – the label id for the label to be removed from datastore

  • label_tag (str) – the label tag for the label to be removed from datastore

Return type

None

abstract save_label(image_id, label_filename, label_tag, label_info)[source]

Save a label for the given image id and return the newly saved label’s id

Parameters
  • image_id (str) – the image id for the label

  • label_filename (str) – the path to the label file

  • label_tag (str) – the user-provided tag for the label

  • label_info (Dict[str, Any]) – additional info for the label

Return type

str

Returns

the label id for the given label filename

abstract set_description(description)[source]

Set a human-readable description of the datastore

Parameters

description (str) – string for description

abstract set_name(name)[source]

Set the name of the datastore

Parameters

name (str) – a human-readable name for the datastore

abstract status()[source]

Return current statistics of datastore

Return type

Dict[str, Any]

abstract update_image_info(image_id, info)[source]

Update (or create a new) info tag for the desired image

Parameters
  • image_id (str) – the id of the image we want to add/update info

  • info (Dict[str, Any]) – a dictionary of custom image information Dict[str, Any]

Return type

None

abstract update_label_info(label_id, label_tag, info)[source]

Update (or create a new) info tag for the desired label

Parameters
  • label_id (str) – the id of the label we want to add/update info

  • label_tag (str) – the matching label tag

  • info (Dict[str, Any]) – a dictionary of custom label information Dict[str, Any]

Return type

None
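The contract above can be illustrated with a minimal in-memory sketch. This is illustrative only (the shipped datastore is file-based, and only a handful of the abstract methods are shown); `InMemoryDatastore` and the `"final"` tag convention are assumptions for the example.

```python
class InMemoryDatastore:
    """Toy datastore: images and labels held in plain dicts."""

    def __init__(self):
        self._images = {}   # image_id -> image filename
        self._labels = {}   # image_id -> {label_tag: label filename}

    def add_image(self, image_id, image_filename, image_info=None):
        # If no id is given, fall back to the filename (mirrors the contract).
        image_id = image_id or image_filename
        self._images[image_id] = image_filename
        return image_id

    def save_label(self, image_id, label_filename, label_tag, label_info=None):
        self._labels.setdefault(image_id, {})[label_tag] = label_filename
        return label_filename

    def get_labeled_images(self):
        return [i for i in self._images if "final" in self._labels.get(i, {})]

    def get_unlabeled_images(self):
        return [i for i in self._images if "final" not in self._labels.get(i, {})]

    def datalist(self):
        # Only images with a final label participate in training.
        return [{"image": self._images[i], "label": self._labels[i]["final"]}
                for i in self.get_labeled_images()]

ds = InMemoryDatastore()
ds.add_image("img1", "/data/img1.nii.gz")
ds.add_image("img2", "/data/img2.nii.gz")
ds.save_label("img1", "/data/img1_seg.nii.gz", "final")
```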

class monailabel.interfaces.exception.MONAILabelError(value)[source]
SERVER_ERROR -            Server Error
UNKNOWN_ERROR -           Unknown Error
CLASS_INIT_ERROR -        Class Initialization Error
MODEL_IMPORT_ERROR -      Model Import Error
INFERENCE_ERROR -         Inference Error
TRANSFORM_ERROR -         Transform Error
APP_INIT_ERROR -          Initialization Error
APP_INFERENCE_FAILED -    Inference Failed
APP_TRAIN_FAILED -        Train Failed
APP_ERROR -               General Error
class monailabel.interfaces.exception.MONAILabelException(error, msg)[source]

MONAI Label Exception
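The (error, msg) pattern can be illustrated with a self-contained analog. `LabelException` below is a stand-in defined for this sketch, not the real class in `monailabel.interfaces.exception`:

```python
class LabelException(Exception):
    """Illustrative analog of MONAILabelException: an error code plus a message."""

    def __init__(self, error, msg):
        super().__init__(msg)
        self.error = error
        self.msg = msg

try:
    raise LabelException("APP_INFERENCE_FAILED", "model 'xyz' not found")
except LabelException as e:
    caught = (e.error, e.msg)
```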

Tasks

class monailabel.interfaces.tasks.infer.InferType[source]

Type of Inference Model

SEGMENTATION -            Segmentation Model
CLASSIFICATION -          Classification Model
DEEPGROW -                Deepgrow Interactive Model
DEEPEDIT -                DeepEdit Interactive Model
SCRIBBLES -               Scribbles Model
OTHERS -                  Other Model Type
class monailabel.interfaces.tasks.infer.InferTask(path, network, type, labels, dimension, description, model_state_dict='model', input_key='image', output_label_key='pred', output_json_key='result', config=None)[source]

Basic Inference Task Helper

Parameters
  • path – Model File Path. Supports multiple paths for versioning (the last item is picked as the latest)

  • network – Model Network (e.g. monai.networks.xyz). Use None for a TorchScript (torch.jit) model.

  • type (InferType) – Type of Infer (segmentation, deepgrow, etc.)

  • dimension – Input dimension

  • description – Description

  • model_state_dict – Key for loading the model state from checkpoint

  • input_key – Input key for running inference

  • output_label_key – Output key for storing result/label of inference

  • output_json_key – Output key for storing the result JSON of inference

  • config – K,V pairs to be part of user config

__init__(path, network, type, labels, dimension, description, model_state_dict='model', input_key='image', output_label_key='pred', output_json_key='result', config=None)[source]
Parameters
  • path – Model File Path. Supports multiple paths for versioning (the last item is picked as the latest)

  • network – Model Network (e.g. monai.networks.xyz). Use None for a TorchScript (torch.jit) model.

  • type (InferType) – Type of Infer (segmentation, deepgrow, etc.)

  • dimension – Input dimension

  • description – Description

  • model_state_dict – Key for loading the model state from checkpoint

  • input_key – Input key for running inference

  • output_label_key – Output key for storing result/label of inference

  • output_json_key – Output key for storing the result JSON of inference

  • config – K,V pairs to be part of user config

abstract inferer()[source]

Provide Inferer Class

For Example:

return monai.inferers.SlidingWindowInferer(roi_size=[160, 160, 160])
inverse_transforms()[source]

Provide a list of inverse transforms. They are normally a subset of the pre-transforms. This task is performed on the output label (using the references from input_key)

Return one of the following.
  • None: Return None to disable running any inverse transforms (default behavior).

  • Empty: Return [] to run all applicable pre-transforms that have an inverse method

  • list: Return a list of specific pre-transform names/classes whose inverse method should be run

For Example:

return [
    monai.transforms.Spacingd,
]
abstract post_transforms()[source]

Provide List of post-transforms

For Example:

return [
    monai.transforms.AddChanneld(keys='pred'),
    monai.transforms.Activationsd(keys='pred', softmax=True),
    monai.transforms.AsDiscreted(keys='pred', argmax=True),
    monai.transforms.SqueezeDimd(keys='pred', dim=0),
    monai.transforms.ToNumpyd(keys='pred'),
    monailabel.interface.utils.Restored(keys='pred', ref_image='image'),
    monailabel.interface.utils.ExtremePointsd(keys='pred', result='result', points='points'),
    monailabel.interface.utils.BoundingBoxd(keys='pred', result='result', bbox='bbox'),
]
abstract pre_transforms()[source]

Provide List of pre-transforms

For Example:

return [
    monai.transforms.LoadImaged(keys='image'),
    monai.transforms.AddChanneld(keys='image'),
    monai.transforms.Spacingd(keys='image', pixdim=[1.0, 1.0, 1.0]),
    monai.transforms.ScaleIntensityRanged(keys='image',
        a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
]
run_inferer(data, convert_to_batch=True, device='cuda')[source]

Run the Inferer over pre-processed data. Override this method to customize the default behavior, e.g. to run chained inferers over pre-processed data.

Parameters
  • data – pre-processed data

  • convert_to_batch – convert input to batched input

  • device – device on which to load the model and run the inferer

Returns

updated data with output_key stored that will be used for post-processing

writer(data, extension=None, dtype=None)[source]

You can provide your own writer. By default, this writer saves the prediction/label mask to a file and returns the result JSON.

Parameters
  • data – typically post-processed data

  • extension – output label extension

  • dtype – output label dtype

Returns

tuple of output_file and result_json
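The overall InferTask flow (pre-transforms → inferer → post-transforms, all operating on a shared data dictionary keyed by input_key / output_label_key / output_json_key) can be sketched with plain functions standing in for MONAI transforms. Everything below is illustrative, not the real pipeline:

```python
def pre_scale(data):
    # Stand-in pre-transform: scale raw intensities into [0, 1].
    data["image"] = [x / 255.0 for x in data["image"]]
    return data

def fake_inferer(data):
    # Stand-in for run_inferer: threshold to produce a binary prediction
    # stored under the output_label_key ("pred").
    data["pred"] = [1 if x > 0.5 else 0 for x in data["image"]]
    return data

def post_to_result(data):
    # Stand-in post-transform: summarize the prediction under the
    # output_json_key ("result").
    data["result"] = {"foreground": sum(data["pred"])}
    return data

data = {"image": [0, 128, 255]}
for stage in (pre_scale, fake_inferer, post_to_result):
    data = stage(data)
```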

class monailabel.interfaces.tasks.train.TrainTask(description)[source]

Basic Train Task

class monailabel.interfaces.tasks.strategy.Strategy(description)[source]

Basic Active Learning Strategy

class monailabel.interfaces.tasks.scoring.ScoringMethod(description)[source]

Basic Scoring Method

Utils

class monailabel.tasks.train.basic_train.BasicTrainTask(model_dir, description=None, config=None, amp=True, load_path=None, load_dict=None, publish_path=None, stats_path=None, train_save_interval=50, val_interval=1, final_filename='checkpoint_final.pt', key_metric_filename='model.pt')[source]

This provides Basic Train Task to train a model using SupervisedTrainer and SupervisedEvaluator from MONAI

Parameters
  • model_dir – Base Model Dir to save the model checkpoints, events etc…

  • description – Description for this task

  • config – K,V pairs to be part of user config

  • amp – Enable AMP for training

  • load_path – Initialize model from existing checkpoint (pre-trained)

  • load_dict – Provide dictionary to load from checkpoint. If None, then net will be loaded

  • publish_path – Publish path for best trained model (based on best key metric)

  • stats_path – Path to save the train stats

  • train_save_interval – checkpoint save interval for training

  • val_interval – validation interval (run every x epochs)

  • final_filename – name of final checkpoint that will be saved

  • key_metric_filename – best key metric model file name

__init__(model_dir, description=None, config=None, amp=True, load_path=None, load_dict=None, publish_path=None, stats_path=None, train_save_interval=50, val_interval=1, final_filename='checkpoint_final.pt', key_metric_filename='model.pt')[source]
Parameters
  • model_dir – Base Model Dir to save the model checkpoints, events etc…

  • description – Description for this task

  • config – K,V pairs to be part of user config

  • amp – Enable AMP for training

  • load_path – Initialize model from existing checkpoint (pre-trained)

  • load_dict – Provide dictionary to load from checkpoint. If None, then net will be loaded

  • publish_path – Publish path for best trained model (based on best key metric)

  • stats_path – Path to save the train stats

  • train_save_interval – checkpoint save interval for training

  • val_interval – validation interval (run every x epochs)

  • final_filename – name of final checkpoint that will be saved

  • key_metric_filename – best key metric model file name
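How train_save_interval and val_interval pace a training run can be sketched with a hypothetical loop (illustrative only, not BasicTrainTask internals):

```python
def schedule(max_epochs, train_save_interval=50, val_interval=1):
    # Collect the epochs on which a checkpoint is saved and
    # on which validation runs.
    saves, vals = [], []
    for epoch in range(1, max_epochs + 1):
        if epoch % val_interval == 0:
            vals.append(epoch)
        if epoch % train_save_interval == 0:
            saves.append(epoch)
    return saves, vals

saves, vals = schedule(max_epochs=100, train_save_interval=50, val_interval=25)
```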