Model Bundle#

Config Item#

class monai.bundle.Instantiable[source]#

Base class for an instantiable object.

abstract instantiate(*args, **kwargs)[source]#

Instantiate the target component and return the instance.

Return type:

object

abstract is_disabled(*args, **kwargs)[source]#

Return a boolean flag to indicate whether the object should be instantiated.

Return type:

bool

class monai.bundle.ComponentLocator(excludes=None)[source]#

Scan all the available classes and functions in the MONAI package and map them with the module paths in a table. It’s used to locate the module path for a provided component name.

Parameters:

excludes – if any string of the excludes exists in the full module name, don’t import this module.

get_component_module_name(name)[source]#

Get the full module name of the class or function with specified name. If target component name exists in multiple packages or modules, return a list of full module names.

Parameters:

name – name of the expected class or function.

class monai.bundle.ConfigComponent(config, id='', locator=None, excludes=None)[source]#

Subclass of monai.bundle.ConfigItem, this class uses a dictionary with string keys to represent a class or function component and supports instantiation.

Currently, several special keys (strings surrounded by _) are defined and interpreted beyond the regular literals:

  • "_target_" (required): class or function identifier of the python module, indicating a monai built-in Python class or function such as "LoadImageDict", a full module name, e.g. "monai.transforms.LoadImageDict", or a callable, e.g. "$@model.forward".

  • "_requires_" (optional): specifies reference IDs (strings starting with "@") or ConfigExpression of the dependencies for this ConfigComponent object. These dependencies will be evaluated/instantiated before this object is instantiated. It is useful when the component doesn’t explicitly depend on the other ConfigItems via its arguments, but requires the dependencies to be instantiated/evaluated beforehand.

  • "_disabled_" (optional): a flag to indicate whether to skip the instantiation.

  • "_desc_" (optional): free text descriptions of the component for code readability.

  • "_mode_" (optional): operating mode for invoking the callable component defined by "_target_":

    • "default": returns component(**kwargs)

    • "callable": returns component or, if kwargs are provided, functools.partial(component, **kwargs)

    • "debug": returns pdb.runcall(component, **kwargs)

Other fields in the config content are passed as input arguments to the target class or function.

from monai.bundle import ComponentLocator, ConfigComponent

locator = ComponentLocator(excludes=["modules_to_exclude"])
config = {
    "_target_": "LoadImaged",
    "keys": ["image", "label"]
}

configer = ConfigComponent(config, id="test", locator=locator)
image_loader = configer.instantiate()
print(image_loader)  # <monai.transforms.io.dictionary.LoadImaged object at 0x7fba7ad1ee50>
Parameters:
  • config – content of a config item.

  • id – name of the current config item, defaults to empty string.

  • locator – a ComponentLocator to convert a module name string into the actual python module. if None, a ComponentLocator(excludes=excludes) will be used.

  • excludes – if locator is None, create a new ComponentLocator with excludes. See also: monai.bundle.ComponentLocator.

instantiate(**kwargs)[source]#

Instantiate component based on self.config content. The target component must be a class or a function, otherwise, return None.

Parameters:

kwargs (Any) – arguments to override or add to the config arguments at instantiation time.

Return type:

object

is_disabled()[source]#

Utility function used in instantiate() to check whether to skip the instantiation.

Return type:

bool

static is_instantiable(config)[source]#

Check whether this config represents a class or function that is to be instantiated.

Parameters:

config (Any) – input config content to check.

Return type:

bool

resolve_args()[source]#

Utility function used in instantiate() to resolve the arguments from current config content.

resolve_module_name()[source]#

Resolve the target module name from current config content. The config content must have "_target_" key.

class monai.bundle.ConfigExpression(config, id='', globals=None)[source]#

Subclass of monai.bundle.ConfigItem, this class represents an executable expression (evaluated with eval(), or, if it is an import statement, imports the module into the globals).

For example:

import monai
from monai.bundle import ConfigExpression

config = "$monai.__version__"
expression = ConfigExpression(config, id="test", globals={"monai": monai})
print(expression.evaluate())
Parameters:
  • config – content of a config item.

  • id – name of current config item, defaults to empty string.

  • globals – additional global context to evaluate the string.

evaluate(globals=None, locals=None)[source]#

Execute the current config content and return the result if it is an expression, based on Python eval(). For more details: https://docs.python.org/3/library/functions.html#eval.

Parameters:
  • globals – besides self.globals, other global symbols used in the expression at runtime.

  • locals – besides globals, may also have some local symbols used in the expression at runtime.

classmethod is_expression(config)[source]#

Check whether the config is an executable expression string. Currently, a string starting with the "$" character is interpreted as an expression.

Parameters:

config – input config content to check.

classmethod is_import_statement(config)[source]#

Check whether the config is an import statement (a special case of expression).

Parameters:

config – input config content to check.

class monai.bundle.ConfigItem(config, id='')[source]#

Basic data structure to represent a configuration item.

A ConfigItem instance can optionally have a string id, so that other items can refer to it. It has a built-in config property to store the configuration object.

Parameters:
  • config (Any) – content of a config item, can be objects of any types, a configuration resolver may interpret the content to generate a configuration object.

  • id (str) – name of the current config item, defaults to empty string.

get_config()[source]#

Get the config content of current config item.

get_id()[source]#

Get the ID name of current config item, useful to identify config items during parsing.

Return type:

str

update_config(config)[source]#

Replace the content of self.config with new config. A typical usage is to modify the initial config content at runtime.

Parameters:

config (Any) – content of a ConfigItem.

Return type:

None

Reference Resolver#

class monai.bundle.ReferenceResolver(items=None)[source]#

Utility class to manage a set of ConfigItem and resolve the references between them.

This class maintains a set of ConfigItem objects and their associated IDs. The IDs must be unique within this set. A string in ConfigItem starting with @ will be treated as a reference to other ConfigItem objects by ID. Since ConfigItem may have a nested dictionary or list structure, the reference string may also contain the separator :: to refer to a substructure by key indexing for a dictionary or integer indexing for a list.

In this class, resolving references is essentially substitution of the reference strings with the corresponding python objects. A typical workflow of resolving references is as follows:

  • Add multiple ConfigItem objects to the ReferenceResolver by add_item().

  • Call get_resolved_content() to automatically resolve the references. This is done (recursively) by:
    • Convert the items to objects, for those that do not have references to other items.
      • If it is instantiable, instantiate it and cache the class instance in resolved_content.

      • If it is an expression, evaluate it and save the value in resolved_content.

    • Substitute the reference strings with the corresponding objects.

Parameters:

items – ConfigItems to resolve; they can also be added later with add_item().

add_item(item)[source]#

Add a ConfigItem to the resolver.

Parameters:

item (ConfigItem) – a ConfigItem.

Return type:

None

classmethod find_refs_in_config(config, id, refs=None)[source]#

Recursively search all the content of the input config item to get the ids of references. A reference is: the ID of another config item ("@XXX" in this config item), a sub-item in the config that is instantiable, or a sub-item in the config that is an expression. For dict and list values, the sub-items are checked recursively.

Parameters:
  • config – input config content to search.

  • id – ID name for the input config item.

  • refs – dict of the ID name and count of found references, default to None.

get_item(id, resolve=False, **kwargs)[source]#

Get the ConfigItem by id.

If resolve=True, the returned item will be resolved, that is, all the reference strings are substituted by the corresponding ConfigItem objects.

Parameters:
  • id – id of the expected config item.

  • resolve – whether to resolve the item if it is not resolved, default to False.

  • kwargs – keyword arguments to pass to _resolve_one_item(). Currently supports instantiate and eval_expr; both default to True.

get_resolved_content(id, **kwargs)[source]#

Get the resolved ConfigItem by id.

Parameters:
  • id – id name of the expected item.

  • kwargs – keyword arguments to pass to _resolve_one_item(). Currently supports instantiate, eval_expr and default. instantiate and eval_expr default to True; default is the config item to return if the id is not in the config content, and must be a ConfigItem object.

classmethod iter_subconfigs(id, config)[source]#

Iterate over the sub-configs of the input config, the output sub_id uses cls.sep to denote substructure.

Parameters:
  • id (str) – id string of the current input config.

  • config (Any) – input config to be iterated.

Return type:

Iterator[tuple[str, str, Any]]

classmethod match_refs_pattern(value)[source]#

Match regular expression for the input string to find the references. The reference string starts with "@", like: "@XXX::YYY::ZZZ".

Parameters:

value (str) – input value to match regular expression.

Return type:

dict[str, int]

classmethod normalize_id(id)[source]#

Normalize the id string to consistently use cls.sep.

Parameters:

id – id string to be normalized.

reset()[source]#

Clear all the added ConfigItem and all the resolved content.

classmethod split_id(id, last=False)[source]#

Split the id string into a list of strings by cls.sep.

Parameters:
  • id – id string to be split.

  • last – whether to split only the rightmost part of the id. Defaults to False (split all parts).

classmethod update_config_with_refs(config, id, refs=None)[source]#

With all the references in refs, update the input config content with references and return the new config.

Parameters:
  • config – input config content to update.

  • id – ID name for the input config.

  • refs – all the referring content with ids, default to None.

classmethod update_refs_pattern(value, refs)[source]#

Match regular expression for the input string to update content with the references. The reference part starts with "@", like: "@XXX::YYY::ZZZ". References dictionary must contain the referring IDs as keys.

Parameters:
  • value (str) – input value to match regular expression.

  • refs (dict) – all the referring components with ids as keys, default to None.

Return type:

str

Config Parser#

class monai.bundle.ConfigParser(config=None, excludes=None, globals=None)[source]#

The primary configuration parser. It traverses a structured config (in the form of nested Python dict or list), creates ConfigItem objects, and assigns unique IDs according to the structures.

This class provides convenient access to the set of ConfigItem of the config by ID. A typical workflow of config parsing is as follows:

  • Initialize ConfigParser with the config source.

  • Call get_parsed_content() to get expected component with id.

from monai.bundle import ConfigParser

config = {
    "my_dims": 2,
    "dims_1": "$@my_dims + 1",
    "my_xform": {"_target_": "LoadImage"},
    "my_net": {"_target_": "BasicUNet", "spatial_dims": "@dims_1", "in_channels": 1, "out_channels": 4},
    "trainer": {"_target_": "SupervisedTrainer", "network": "@my_net", "preprocessing": "@my_xform"}
}
# in the example $@my_dims + 1 is an expression, which adds 1 to the value of @my_dims
parser = ConfigParser(config)

# get/set configuration content, the set method should happen before calling parse()
print(parser["my_net"]["in_channels"])  # original input channels 1
parser["my_net"]["in_channels"] = 4  # change input channels to 4
print(parser["my_net"]["in_channels"])

# instantiate the network component
parser.parse(True)
net = parser.get_parsed_content("my_net", instantiate=True)
print(net)

# also support to get the configuration content of parsed `ConfigItem`
trainer = parser.get_parsed_content("trainer", instantiate=False)
print(trainer)
Parameters:
  • config – input config source to parse.

  • excludes – when importing modules to instantiate components, excluding components from modules specified in excludes.

  • globals – pre-import packages as global variables to ConfigExpression, so that expressions, for example, "$monai.data.list_data_collate" can use monai modules. The current supported globals and alias names are {"monai": "monai", "torch": "torch", "np": "numpy", "numpy": "numpy"}. These are MONAI’s minimal dependencies. Additional packages could be included with globals={"itk": "itk"}. Set it to False to disable self.globals module importing.

__contains__(id)[source]#

Returns True if id is stored in this configuration.

Parameters:

id – id to specify the expected position. See also __getitem__().

__getattr__(id)[source]#

Get the parsed result of ConfigItem with the specified id with default arguments (e.g. lazy=True, instantiate=True and eval_expr=True).

Parameters:

id – id of the ConfigItem.

__getitem__(id)[source]#

Get the config by id.

Parameters:

id – id of the ConfigItem, "::" (or "#") in id are interpreted as special characters to go one level further into the nested structures. Use digits indexing from “0” for list or other strings for dict. For example: "xform::5", "net::channels". "" indicates the entire self.config.

__init__(config=None, excludes=None, globals=None)[source]#
__repr__()[source]#

Return repr(self).

__setitem__(id, config)[source]#

Set config by id. Note that this method should be used before parse() or get_parsed_content() to ensure the updates are included in the parsed content.

Parameters:
  • id – id of the ConfigItem, "::" (or "#") in id are interpreted as special characters to go one level further into the nested structures. Use digits indexing from “0” for list or other strings for dict. For example: "xform::5", "net::channels". "" indicates the entire self.config.

  • config – config to set at location id.

__weakref__#

list of weak references to the object (if defined)

classmethod export_config_file(config, filepath, fmt='json', **kwargs)[source]#

Export the config content to the specified file path (currently support JSON and YAML files).

Parameters:
  • config (dict) – source config content to export.

  • filepath (Union[str, PathLike]) – target file path to save.

  • fmt (str) – format of config content, currently support "json" and "yaml".

  • kwargs (Any) – other arguments for json.dump or yaml.safe_dump, depending on the file format.

Return type:

None

get(id='', default=None)[source]#

Get the config by id.

Parameters:
  • id – id to specify the expected position. See also __getitem__().

  • default – default value to return if the specified id is invalid.

get_parsed_content(id='', **kwargs)[source]#

Get the parsed result of ConfigItem with the specified id.

  • If the item is ConfigComponent and instantiate=True, the result is the instance.

  • If the item is ConfigExpression and eval_expr=True, the result is the evaluated output.

  • Else, the result is the configuration content of ConfigItem.

Parameters:
  • id (str) – id of the ConfigItem, "::" (or "#") in id are interpreted as special characters to go one level further into the nested structures. Use digits indexing from “0” for list or other strings for dict. For example: "xform::5", "net::channels". "" indicates the entire self.config.

  • kwargs (Any) – additional keyword arguments to be passed to _resolve_one_item. Currently support lazy (whether to retain the current config cache, default to True), instantiate (whether to instantiate the ConfigComponent, default to True) and eval_expr (whether to evaluate the ConfigExpression, default to True), default (the default config item if the id is not in the config content).

Return type:

Any

classmethod load_config_file(filepath, **kwargs)[source]#

Load a single config file with specified file path (currently support JSON and YAML files).

Parameters:
  • filepath (Union[str, PathLike]) – path of target file to load, supported postfixes: .json, .yml, .yaml.

  • kwargs (Any) – other arguments for json.load or yaml.safe_load, depending on the file format.

Return type:

dict

classmethod load_config_files(files, **kwargs)[source]#

Load multiple config files into a single config dict. The latter config file in the list will override or add the former config file. "::" (or "#") in the config keys are interpreted as special characters to go one level further into the nested structures.

Parameters:
  • files – paths of the target files to load, supported postfixes: .json, .yml, .yaml. If a list of files is provided, their content will be merged; if a string of comma-separated file paths is provided, the referenced files will be merged; if a dictionary is provided, it is returned directly.

  • kwargs – other arguments for json.load or yaml.safe_load, depending on the file format.

parse(reset=True)[source]#

Recursively resolve self.config to replace the macro tokens with target content. Then recursively parse the config source, add every item as ConfigItem to the reference resolver.

Parameters:

reset (bool) – whether to reset the reference_resolver before parsing. Defaults to True.

Return type:

None

read_config(f, **kwargs)[source]#

Read the config from specified JSON/YAML file or a dictionary and override the config content in the self.config dictionary.

Parameters:
  • f – filepath of the config file, the content must be a dictionary. If a list of files is provided, their content will be merged; if a dictionary is provided directly, it is used as the config.

  • kwargs – other arguments for json.load or yaml.safe_load, depends on the file format.

read_meta(f, **kwargs)[source]#

Read the metadata from specified JSON or YAML file. The metadata as a dictionary will be stored at self.config["_meta_"].

Parameters:
  • f – filepath of the metadata file, the content must be a dictionary. If a list of files is provided, their content will be merged; if a dictionary is provided directly, it is used as the metadata.

  • kwargs – other arguments for json.load or yaml.safe_load, depends on the file format.

resolve_macro_and_relative_ids()[source]#

Recursively resolve self.config: replace the relative ids with absolute ids (for example, @##A means A in the upper level), and replace the macro tokens with the target content. Macro tokens start with “%” and can refer to content from another structured file, like: "%default_net", "%/data/config.json::net".

classmethod resolve_relative_ids(id, value)[source]#

To simplify reference or macro token IDs in the nested config content, a relative ID name starting with the ID_SEP_KEY can be used; for example, “@#A” means A at the same level, and “@##A” means A at the upper level. This method resolves the relative ids to absolute ids. For example, if the input data is:

{
    "A": 1,
    "B": {"key": "@##A", "value1": 2, "value2": "%#value1", "value3": [3, 4, "@#1"]},
}

It will resolve B to {"key": "@A", "value1": 2, "value2": "%B#value1", "value3": [3, 4, "@B#value3#1"]}.

Parameters:
  • id (str) – id name for current config item to compute relative id.

  • value (str) – input value to resolve relative ids.

Return type:

str

set(config, id='', recursive=True)[source]#

Set config by id.

Parameters:
  • config (Any) – config to set at location id.

  • id (str) – id to specify the expected position. See also __setitem__().

  • recursive (bool) – if the nested id doesn’t exist, whether to recursively create the nested items in the config. Defaults to True. For nested ids, only dict is supported for the missing sections.

Return type:

None

classmethod split_path_id(src)[source]#

Split src string into two parts: a config file path and a component id. The file path should end with (json|yaml|yml); the component id, if present, follows a :: separator. If there is no path or no id, “” is returned for that part.

Parameters:

src (str) – source string to split.

Return type:

tuple[str, str]

update(pairs)[source]#

Set the id and the corresponding config content in pairs, see also __setitem__(). For example, parser.update({"train::epoch": 100, "train::lr": 0.02})

Parameters:

pairs (dict[str, Any]) – dictionary of id and config pairs.

Return type:

None

Scripts#

monai.bundle.ckpt_export(net_id=None, filepath=None, ckpt_file=None, meta_file=None, config_file=None, key_in_ckpt=None, use_trace=None, input_shape=None, args_file=None, converter_kwargs=None, **override)[source]#

Export the model checkpoint to the given filepath with metadata and config included as JSON files.

Typical usage examples:

python -m monai.bundle ckpt_export network --filepath <export path> --ckpt_file <checkpoint path> ...
Parameters:
  • net_id – ID name of the network component in the config, it must be torch.nn.Module. Default to “network_def”.

  • filepath – filepath to export, if filename has no extension it becomes .ts. Default to “models/model.ts” under “os.getcwd()” if bundle_root is not specified.

  • ckpt_file – filepath of the model checkpoint to load. Default to “models/model.pt” under “os.getcwd()” if bundle_root is not specified.

  • meta_file – filepath of the metadata file, if it is a list of file paths, the content of them will be merged. Default to “configs/metadata.json” under “os.getcwd()” if bundle_root is not specified.

  • config_file – filepath of the config file to save in the TorchScript model and extract network information; the saved key in the TorchScript model is the config filename without extension, and the saved config value is always serialized in JSON format regardless of whether the original file format is JSON or YAML. It can be a single file or a list of files. If None, it must be provided in args_file.

  • key_in_ckpt – for nested checkpoint like {“model”: XXX, “optimizer”: XXX, …}, specify the key of model weights. if not nested checkpoint, no need to set.

  • use_trace – whether using torch.jit.trace to convert the PyTorch model to TorchScript model.

  • input_shape – a shape used to generate the random input of the network, when converting the model to a TorchScript model. Should be a list like [N, C, H, W] or [N, C, H, W, D]. If not given, will try to parse from the metadata config.

  • args_file – a JSON or YAML file to provide default values for all the parameters of this function, so that the command line inputs can be simplified.

  • converter_kwargs – extra arguments that are needed by convert_to_torchscript, except ones that already exist in the input parameters.

  • override – id-value pairs to override or add the corresponding config content. e.g. --_meta#network_data_format#inputs#image#num_channels 3.

monai.bundle.trt_export(net_id=None, filepath=None, ckpt_file=None, meta_file=None, config_file=None, key_in_ckpt=None, precision=None, input_shape=None, use_trace=None, dynamic_batchsize=None, device=None, use_onnx=None, onnx_input_names=None, onnx_output_names=None, args_file=None, converter_kwargs=None, **override)[source]#

Export the model checkpoint to the given filepath as a TensorRT engine-based TorchScript. Currently, this API only supports converting models whose inputs are all tensors.

There are two ways to export a model:

  • Torch-TensorRT way: PyTorch module —> TorchScript module —> TensorRT engine-based TorchScript.

  • ONNX-TensorRT way: PyTorch module —> TorchScript module —> ONNX model —> TensorRT engine —> TensorRT engine-based TorchScript.

When exporting through the first way, some models suffer from a slowdown problem, since Torch-TensorRT may convert only a small part of the PyTorch model to the TensorRT engine. When exporting through the second way, some Python data structures like dict are not supported, and some TorchScript models are not supported by ONNX if exported through torch.jit.script.

Typical usage examples:

python -m monai.bundle trt_export --net_id <network definition> --filepath <export path> --ckpt_file <checkpoint path> --input_shape <input shape> --dynamic_batchsize <batch range> ...
Parameters:
  • net_id – ID name of the network component in the config, it must be torch.nn.Module.

  • filepath – filepath to export, if filename has no extension, it becomes .ts.

  • ckpt_file – filepath of the model checkpoint to load.

  • meta_file – filepath of the metadata file, if it is a list of file paths, the content of them will be merged.

  • config_file – filepath of the config file to save in the TensorRT based TorchScript model and extract network information; the saved key in the model is the config filename without extension, and the saved config value is always serialized in JSON format regardless of whether the original file format is JSON or YAML. It can be a single file or a list of files. If None, it must be provided in args_file.

  • key_in_ckpt – for nested checkpoint like {“model”: XXX, “optimizer”: XXX, …}, specify the key of model weights. if not nested checkpoint, no need to set.

  • precision – the weight precision of the converted TensorRT engine based TorchScript models. Should be ‘fp32’ or ‘fp16’.

  • input_shape – the input shape that is used to convert the model. Should be a list like [N, C, H, W] or [N, C, H, W, D]. If not given, will try to parse from the metadata config.

  • use_trace – whether using torch.jit.trace to convert the PyTorch model to a TorchScript model and then convert to a TensorRT engine based TorchScript model or an ONNX model (if use_onnx is True).

  • dynamic_batchsize – a sequence with three elements defining the batch size range of the input for the model to be converted, like [MIN_BATCH, OPT_BATCH, MAX_BATCH]. After conversion, the batch size of the model input should be between MIN_BATCH and MAX_BATCH; OPT_BATCH is the batch size that TensorRT tries to optimize for, and should be the most frequently used input batch size in the application.

  • device – the target GPU index to convert and verify the model.

  • use_onnx – whether using the ONNX-TensorRT way to export the TensorRT engine-based TorchScript model.

  • onnx_input_names – optional input names of the ONNX model. This arg is only useful when use_onnx is True. Should be a sequence like [‘input_0’, ‘input_1’, …, ‘input_N’] where N equals the number of the model inputs. If not given, will use [‘input_0’], which assumes the model has only one input.

  • onnx_output_names – optional output names of the ONNX model. This arg is only useful when use_onnx is True. Should be a sequence like [‘output_0’, ‘output_1’, …, ‘output_N’] where N equals the number of the model outputs. If not given, will use [‘output_0’], which assumes the model has only one output.

  • args_file – a JSON or YAML file to provide default values for all the parameters of this function, so that the command line inputs can be simplified.

  • converter_kwargs – extra arguments that are needed by convert_to_trt, except ones that already exist in the input parameters.

  • override – id-value pairs to override or add the corresponding config content. e.g. --_meta#network_data_format#inputs#image#num_channels 3.

monai.bundle.onnx_export(net_id=None, filepath=None, ckpt_file=None, meta_file=None, config_file=None, key_in_ckpt=None, use_trace=None, input_shape=None, args_file=None, converter_kwargs=None, **override)[source]#

Export the model checkpoint to an onnx model.

Typical usage examples:

python -m monai.bundle onnx_export network --filepath <export path> --ckpt_file <checkpoint path> ...
Parameters:
  • net_id – ID name of the network component in the config, it must be torch.nn.Module.

  • filepath – filepath where the onnx model is saved to.

  • ckpt_file – filepath of the model checkpoint to load.

  • meta_file – filepath of the metadata file, if it is a list of file paths, the content of them will be merged.

  • config_file – filepath of the config file from which to extract network information.

  • key_in_ckpt – for nested checkpoint like {“model”: XXX, “optimizer”: XXX, …}, specify the key of model weights. if not nested checkpoint, no need to set.

  • use_trace – whether using torch.jit.trace to convert the PyTorch model to a TorchScript model.

  • input_shape – a shape used to generate the random input of the network, when converting the model to an onnx model. Should be a list like [N, C, H, W] or [N, C, H, W, D]. If not given, will try to parse from the metadata config.

  • args_file – a JSON or YAML file to provide default values for all the parameters of this function, so that the command line inputs can be simplified.

  • converter_kwargs – extra arguments that are needed by convert_to_onnx, except ones that already exist in the input parameters.

  • override – id-value pairs to override or add the corresponding config content. e.g. --_meta#network_data_format#inputs#image#num_channels 3.

monai.bundle.download(name=None, version=None, bundle_dir=None, source='monaihosting', repo=None, url=None, remove_prefix='monai_', progress=True, args_file=None)[source]#

Download a bundle from the specified source or url. The bundle should be a zip file, and it will be extracted after downloading. This function refers to: https://pytorch.org/docs/stable/_modules/torch/hub.html

Typical usage examples:

# Execute this module as a CLI entry, and download bundle from the model-zoo repo:
python -m monai.bundle download --name <bundle_name> --version "0.1.0" --bundle_dir "./"

# Execute this module as a CLI entry, and download bundle from specified github repo:
python -m monai.bundle download --name <bundle_name> --source "github" --repo "repo_owner/repo_name/release_tag"

# Execute this module as a CLI entry, and download bundle from ngc with latest version:
python -m monai.bundle download --name <bundle_name> --source "ngc" --bundle_dir "./"

# Execute this module as a CLI entry, and download bundle from monaihosting with latest version:
python -m monai.bundle download --name <bundle_name> --source "monaihosting" --bundle_dir "./"

# Execute this module as a CLI entry, and download bundle from Hugging Face Hub:
python -m monai.bundle download --name "bundle_name" --source "huggingface_hub" --repo "repo_owner/repo_name"

# Execute this module as a CLI entry, and download bundle via URL:
python -m monai.bundle download --name <bundle_name> --url <url>

# Set default args of `run` in a JSON / YAML file, help to record and simplify the command line.
# Other args still can override the default args at runtime.
# The content of the JSON / YAML file is a dictionary. For example:
# {"name": "spleen", "bundle_dir": "download", "source": ""}
# then do the following command for downloading:
python -m monai.bundle download --args_file "args.json" --source "github"
Parameters:
  • name – bundle name. If None and url is None, it must be provided in args_file. For example: “spleen_ct_segmentation”, “prostate_mri_anatomy” in model-zoo: Project-MONAI/model-zoo. “monai_brats_mri_segmentation” in ngc: https://catalog.ngc.nvidia.com/models?filters=&orderBy=scoreDESC&query=monai.

  • version – version name of the target bundle to download, like: “0.1.0”. If None, will download the latest version (or the last commit to the main branch in the case of Hugging Face Hub).

  • bundle_dir – target directory to store the downloaded data. Default is bundle subfolder under torch.hub.get_dir().

  • source – storage location name. This argument is used when url is None. By default, the value is read from the environment variable BUNDLE_DOWNLOAD_SRC, and it should be “ngc”, “monaihosting”, “github”, or “huggingface_hub”.

  • repo – repo name. This argument is used when url is None and source is “github” or “huggingface_hub”. If source is “github”, it should be in the form of “repo_owner/repo_name/release_tag”. If source is “huggingface_hub”, it should be in the form of “repo_owner/repo_name”.

  • url – url to download the data. If not None, data will be downloaded directly and source will not be checked. If name is None, filename is determined by monai.apps.utils._basename(url).

  • remove_prefix – This argument is used when source is “ngc”. Currently, all ngc bundles have the monai_ prefix, which does not exist in their model zoo counterparts. To maintain consistency between these two sources, the prefix needs to be removed; therefore, if specified, it will be stripped from the downloaded folder name.

  • progress – whether to display a progress bar.

  • args_file – a JSON or YAML file to provide default values for all the args in this function, so that the command line inputs can be simplified.

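The args_file mechanism above can be sketched in a few lines of plain Python. This is a simplified illustration only: merge_args and the file handling are hypothetical, and the real implementation also accepts YAML and performs nested updates.

```python
import json
import tempfile

def merge_args(args_file=None, **overrides):
    # Load default args from a JSON file, then let explicit kwargs override
    # them; None values are ignored so unset CLI options keep file defaults.
    args = {}
    if args_file is not None:
        with open(args_file) as f:
            args = json.load(f)
    args.update({k: v for k, v in overrides.items() if v is not None})
    return args

# Defaults recorded in a file; "source" overridden at the command line.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"name": "spleen", "bundle_dir": "download", "source": ""}, f)
    args_path = f.name

merged = merge_args(args_file=args_path, source="github")
print(merged)  # {'name': 'spleen', 'bundle_dir': 'download', 'source': 'github'}
```
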
monai.bundle.load(name, model=None, version=None, workflow_type='train', model_file=None, load_ts_module=False, bundle_dir=None, source='monaihosting', repo=None, remove_prefix='monai_', progress=True, device=None, key_in_ckpt=None, config_files=(), workflow_name=None, args_file=None, copy_model_args=None, return_state_dict=True, net_override=None, net_name=None, **net_kwargs)[source]#

Load model weights or TorchScript module of a bundle.

Parameters:
  • name – bundle name. If None and url is None, it must be provided in args_file. For example: “spleen_ct_segmentation”, “prostate_mri_anatomy” in the model zoo: Project-MONAI/model-zoo; “monai_brats_mri_segmentation” in ngc: https://catalog.ngc.nvidia.com/models?filters=&orderBy=scoreDESC&query=monai; “mednist_gan” in monaihosting: https://api.ngc.nvidia.com/v2/models/nvidia/monaihosting/mednist_gan/versions/0.2.0/files/mednist_gan_v0.2.0.zip

  • model – a pytorch module to be updated. Default to None, using the “network_def” in the bundle.

  • version – version name of the target bundle to download, like: “0.1.0”. If None, will download the latest version. If source is “huggingface_hub”, this argument is a Git revision id.

  • workflow_type – specifies the workflow type: “train” or “training” for a training workflow, or “infer”, “inference”, “eval”, or “evaluation” for an inference workflow; any other string will raise a ValueError. Default to “train” for a training workflow.

  • model_file – the relative path of the model weights or TorchScript module within the bundle. If None, “models/model.pt” or “models/model.ts” will be used.

  • load_ts_module – a flag to specify if loading the TorchScript module.

  • bundle_dir – directory the weights/TorchScript module will be loaded from. Default is bundle subfolder under torch.hub.get_dir().

  • source – storage location name. This argument is used when model_file does not exist locally and needs to be downloaded first. By default, the value is read from the environment variable BUNDLE_DOWNLOAD_SRC, and it should be “ngc”, “monaihosting”, “github”, or “huggingface_hub”.

  • repo – repo name. This argument is used when url is None and source is “github” or “huggingface_hub”. If source is “github”, it should be in the form of “repo_owner/repo_name/release_tag”. If source is “huggingface_hub”, it should be in the form of “repo_owner/repo_name”.

  • remove_prefix – This argument is used when source is “ngc”. Currently, all ngc bundles have the monai_ prefix, which does not exist in their model zoo counterparts. To maintain consistency between these sources, the prefix needs to be removed; therefore, if specified, it will be stripped from the downloaded folder name.

  • progress – whether to display a progress bar when downloading.

  • device – target device of the returned weights or module; if None, “cuda” is preferred if available.

  • key_in_ckpt – for a nested checkpoint like {“model”: XXX, “optimizer”: XXX, …}, specify the key of the model weights. If the checkpoint is not nested, this does not need to be set.

  • config_files – extra filenames to be loaded. The argument only works when loading a TorchScript module, see _extra_files in torch.jit.load for more details.

  • workflow_name – specified bundle workflow name, should be a string or class, default to “ConfigWorkflow”.

  • args_file – a JSON or YAML file to provide default values for all the args in “download” function.

  • copy_model_args – other arguments for the monai.networks.copy_model_state function.

  • return_state_dict – whether to return the state dict; if True, return the state_dict, otherwise a corresponding network from _workflow.network_def will be instantiated and loaded with the downloaded weights.

  • net_override – id-value pairs to override the parameters in the network of the bundle, default to None.

  • net_name – if not None, a corresponding network will be instantiated and loaded with the downloaded weights. This argument only works when loading weights.

  • net_kwargs – other arguments that are used to instantiate the network class defined by net_name.

Returns:

  1. If load_ts_module is False and model is None, return the model weights if “network_def” cannot be found in the bundle, otherwise return an instantiated network that has loaded the weights.

  2. If load_ts_module is False and model is not None, return an instantiated network that has loaded the weights.

  3. If load_ts_module is True, return a triple that includes a TorchScript module, the corresponding metadata dict, and the extra files dict. Please check monai.data.load_net_with_metadata for more details.

  4. If return_state_dict is True, return the model weights; this is only used for compatibility when model and net_name are both None.

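The return cases can be summarized as a small decision sketch. This is an illustration of the documented cases only, not the actual control flow of monai.bundle.load; the returned strings merely label each case.

```python
def describe_load_result(load_ts_module=False, model_given=False,
                         has_network_def=True, return_state_dict=True,
                         net_name=None):
    # Case 3: a TorchScript request yields the (module, metadata, files) triple.
    if load_ts_module:
        return "(torchscript_module, metadata_dict, extra_files_dict)"
    # Case 4: compatibility path, plain weights when nothing is to be instantiated.
    if return_state_dict and not model_given and net_name is None:
        return "state_dict"
    # Case 2: an explicitly supplied model is updated with the loaded weights.
    if model_given:
        return "model updated with loaded weights"
    # Case 1: otherwise fall back to the bundle's own network_def, if present.
    if has_network_def:
        return "network instantiated from network_def, weights loaded"
    return "state_dict"

print(describe_load_result(load_ts_module=True))
```
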
monai.bundle.get_all_bundles_list(repo='Project-MONAI/model-zoo', tag='dev', auth_token=None)[source]#

Get all bundle names (and the latest versions) that are stored in the release of the specified repository with the provided tag. If tag is “dev”, model information will be read from https://raw.githubusercontent.com/repo_owner/repo_name/dev/models/model_info.json. The default values of the arguments correspond to the release of the MONAI model zoo. In order to increase the rate limits of calling Github APIs, you can input your personal access token. Please check the following link for more details about rate limiting: https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting

The following link shows how to create your personal access token: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token

Parameters:
  • repo – it should be in the form of “repo_owner/repo_name/”.

  • tag – the tag name of the release.

  • auth_token – github personal access token.

Returns:

a list of tuple in the form of (bundle name, latest version).

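As an illustration of the (bundle name, latest version) pairing, the sketch below assumes entries keyed as "<bundle_name>_v<version>" — an assumption made for this example only, and latest_versions is a hypothetical helper, not part of the MONAI API.

```python
def latest_versions(model_info_keys):
    # Keep, per bundle name, the numerically highest version seen so far.
    latest = {}
    for key in model_info_keys:
        name, _, version = key.rpartition("_v")  # "spleen..._v0.1.0" -> split
        parts = tuple(int(p) for p in version.split("."))
        if name not in latest or parts > latest[name][0]:
            latest[name] = (parts, version)
    return [(name, ver) for name, (_, ver) in latest.items()]

keys = ["spleen_ct_segmentation_v0.1.0", "spleen_ct_segmentation_v0.3.1",
        "mednist_gan_v0.2.0"]
print(latest_versions(keys))
# [('spleen_ct_segmentation', '0.3.1'), ('mednist_gan', '0.2.0')]
```
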
monai.bundle.get_bundle_info(bundle_name, version=None, repo='Project-MONAI/model-zoo', tag='dev', auth_token=None)[source]#

Get all information (including “name” and “browser_download_url”) of a bundle with the specified bundle name and version, stored in the release of the specified repository with the provided tag. In order to increase the rate limits of calling Github APIs, you can input your personal access token. Please check the following link for more details about rate limiting: https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting

The following link shows how to create your personal access token: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token

Parameters:
  • bundle_name – bundle name.

  • version – version name of the target bundle, if None, the latest version will be used.

  • repo – it should be in the form of “repo_owner/repo_name/”.

  • tag – the tag name of the release.

  • auth_token – github personal access token.

Returns:

a dictionary that contains the bundle’s information.

monai.bundle.get_bundle_versions(bundle_name, repo='Project-MONAI/model-zoo', tag='dev', auth_token=None)[source]#

Get the latest version, as well as all existing versions, of a bundle that is stored in the release of the specified repository with the provided tag. If tag is “dev”, model information will be read from https://raw.githubusercontent.com/repo_owner/repo_name/dev/models/model_info.json. In order to increase the rate limits of calling Github APIs, you can input your personal access token. Please check the following link for more details about rate limiting: https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting

The following link shows how to create your personal access token: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token

Parameters:
  • bundle_name – bundle name.

  • repo – it should be in the form of “repo_owner/repo_name/”.

  • tag – the tag name of the release.

  • auth_token – github personal access token.

Returns:

a dictionary that contains the latest version and all versions of a bundle.

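The shape of the returned dictionary can be illustrated with a small sketch that orders versions numerically (bundle_versions is a hypothetical helper, not the MONAI implementation):

```python
def bundle_versions(all_versions):
    # Sort numerically so "0.10.0" outranks "0.2.0" (a plain string sort
    # would order them the other way around).
    ordered = sorted(all_versions,
                     key=lambda v: tuple(int(p) for p in v.split(".")))
    return {"latest_version": ordered[-1], "all_versions": ordered}

print(bundle_versions(["0.1.0", "0.10.0", "0.2.0"]))
# {'latest_version': '0.10.0', 'all_versions': ['0.1.0', '0.2.0', '0.10.0']}
```
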
monai.bundle.run(run_id=None, init_id=None, final_id=None, meta_file=None, config_file=None, logging_file=None, tracking=None, args_file=None, **override)[source]#

Specify config_file to run monai bundle components and workflows.

Typical usage examples:

# Execute this module as a CLI entry:
python -m monai.bundle run --meta_file <meta path> --config_file <config path>

# Execute with specified `run_id=training`:
python -m monai.bundle run training --meta_file <meta path> --config_file <config path>

# Execute with all specified `run_id=runtest`, `init_id=inittest`, `final_id=finaltest`:
python -m monai.bundle run --run_id runtest --init_id inittest --final_id finaltest ...

# Override config values at runtime by specifying the component id and its new value:
python -m monai.bundle run --net#input_chns 1 ...

# Override config values with another config file `/path/to/another.json`:
python -m monai.bundle run --net %/path/to/another.json ...

# Override config values with part content of another config file:
python -m monai.bundle run --net %/data/other.json#net_arg ...

# Set default args of `run` in a JSON / YAML file, to help record and simplify the command line.
# Other args can still override the default args at runtime:
python -m monai.bundle run --args_file "/workspace/data/args.json" --config_file <config path>
Parameters:
  • run_id – ID name of the expected config expression to run, default to “run”. To run the config, the target config must contain this ID.

  • init_id – ID name of the expected config expression to initialize before running, default to “initialize”. It’s optional for both the configs and this run function.

  • final_id – ID name of the expected config expression to finalize after running, default to “finalize”. It’s optional for both the configs and this run function.

  • meta_file – filepath of the metadata file, if it is a list of file paths, the content of them will be merged. Default to None.

  • config_file – filepath of the config file, if None, must be provided in args_file. if it is a list of file paths, the content of them will be merged.

  • logging_file – config file for logging module in the program. for more details: https://docs.python.org/3/library/logging.config.html#logging.config.fileConfig. Default to None.

  • tracking

    if not None, enable experiment tracking at runtime; it is optionally configurable and extensible. If “mlflow”, MLFlowHandler will be added to the parsed bundle with default tracking settings, where a set of common parameters shown below will be added and can be passed through the override parameter of this method.

    • "output_dir": the path to save mlflow tracking outputs locally, default to “<bundle root>/eval”.

    • "tracking_uri": uri to save mlflow tracking outputs, default to “/output_dir/mlruns”.

    • "experiment_name": experiment name for this run, default to “monai_experiment”.

    • "run_name": the name of current run.

    • "save_execute_config": whether to save the executed config files. It can be False, /path/to/artifacts or True. If set to True, will save to the default path “<bundle_root>/eval”. Default to True.

    If it is any other string, treat it as a file path from which to load the tracking settings. If it is a dict, treat it as the tracking settings. The target config content will be patched with the tracking handlers and the top-level items of the configs. For detailed usage examples, please check the tutorial: Project-MONAI/tutorials.

  • args_file – a JSON or YAML file to provide default values for run_id, meta_file, config_file, logging_file, and override pairs, so that the command line inputs can be simplified.

  • override – id-value pairs to override or add the corresponding config content. e.g. --net#input_chns 42, --net %/data/other.json#net_arg.

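The --id#subkey override syntax above can be sketched as a nested-dict update. This is a simplified illustration (apply_override is a hypothetical helper); the real parser also handles "%file#id" and "@id" references.

```python
def apply_override(config, key, value):
    # Walk "#"-separated path segments, creating nested dicts as needed,
    # and set the final segment to the new value.
    *path, last = key.split("#")
    node = config
    for part in path:
        node = node.setdefault(part, {})
    node[last] = value
    return config

# Equivalent in spirit to: python -m monai.bundle run --net#input_chns 1 ...
cfg = {"net": {"_target_": "BasicUNet", "input_chns": 3}}
apply_override(cfg, "net#input_chns", 1)
print(cfg["net"]["input_chns"])  # 1
```
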
monai.bundle.verify_metadata(meta_file=None, filepath=None, create_dir=None, hash_val=None, hash_type=None, args_file=None, **kwargs)[source]#

Verify the provided metadata file based on the predefined schema. metadata content must contain the schema field for the URL of schema file to download. The schema standard follows: http://json-schema.org/.

Parameters:
  • meta_file – filepath of the metadata file to verify, if None, must be provided in args_file. if it is a list of file paths, the content of them will be merged.

  • filepath – file path to store the downloaded schema.

  • create_dir – whether to create directories if not existing, default to True.

  • hash_val – if not None, define the hash value to verify the downloaded schema file.

  • hash_type – if not None, define the hash type to verify the downloaded schema file. Defaults to “md5”.

  • args_file – a JSON or YAML file to provide default values for all the args in this function, so that the command line inputs can be simplified.

  • kwargs – other arguments for jsonschema.validate(). for more details: https://python-jsonschema.readthedocs.io/en/stable/validate/#jsonschema.validate.

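The core idea of the verification can be sketched as a minimal required-field check. The field list and check_required_fields are illustrative only; the real function downloads the schema named in the metadata's "schema" entry and delegates validation to jsonschema.validate.

```python
def check_required_fields(metadata, required=("schema", "version", "monai_version")):
    # The "schema" entry holds the URL of the JSON schema the metadata must
    # satisfy; the other field names here are assumptions for this sketch.
    missing = [k for k in required if k not in metadata]
    if missing:
        raise ValueError(f"metadata is missing required fields: {missing}")
    return True

meta = {"schema": "https://example.com/meta_schema.json",
        "version": "0.1.0", "monai_version": "1.3.0"}
print(check_required_fields(meta))  # True
```
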
monai.bundle.verify_net_in_out(net_id=None, meta_file=None, config_file=None, device=None, p=None, n=None, any=None, extra_forward_args=None, args_file=None, **override)[source]#

Verify the input and output data shape and data type of the network defined in the metadata. It will test with fake Tensor data according to the required data shape in the metadata.

Typical usage examples:

python -m monai.bundle verify_net_in_out network --meta_file <meta path> --config_file <config path>
Parameters:
  • net_id – ID name of the network component to verify, it must be a torch.nn.Module.

  • meta_file – filepath of the metadata file to get network args, if None, must be provided in args_file. if it is a list of file paths, the content of them will be merged.

  • config_file – filepath of the config file to get network definition, if None, must be provided in args_file. if it is a list of file paths, the content of them will be merged.

  • device – target device to run the network forward computation; if None, “cuda” is preferred if available.

  • p – power factor to generate fake data shape if dim of expected shape is “x**p”, default to 1.

  • n – multiply factor to generate fake data shape if dim of expected shape is “x*n”, default to 1.

  • any – specified size to generate fake data shape if dim of expected shape is “*”, default to 1.

  • extra_forward_args – a dictionary that contains other args for the forward function of the network. Default to an empty dictionary.

  • args_file – a JSON or YAML file to provide default values for net_id, meta_file, config_file, device, p, n, any, and override pairs, so that the command line inputs can be simplified.

  • override – id-value pairs to override or add the corresponding config content. e.g. --_meta#network_data_format#inputs#image#num_channels 3.

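The p / n / any factors can be illustrated with a small sketch that resolves shape-spec entries into concrete sizes, assuming specs such as 8, "2**p", "16*n", or "*" (fake_dim is a hypothetical helper, not the MONAI implementation):

```python
def fake_dim(spec, p=1, n=1, any_size=1):
    # "*" -> the `any` size; strings like "2**p" or "16*n" are evaluated
    # with the given factors; plain integers pass through unchanged.
    if spec == "*":
        return any_size
    if isinstance(spec, str):
        return eval(spec, {"__builtins__": {}}, {"p": p, "n": n})
    return spec

shape = [fake_dim(s, p=2, n=3, any_size=5) for s in [1, "2**p", "16*n", "*"]]
print(shape)  # [1, 4, 48, 5]
```
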
monai.bundle.init_bundle(bundle_dir, ckpt_file=None, network=None, dataset_license=False, metadata_str=None, inference_str=None)[source]#

Initialise a new bundle directory with some default configuration files and optionally network weights.

Typical usage example:

python -m monai.bundle init_bundle /path/to/bundle_dir network_ckpt.pt
Parameters:
  • bundle_dir – directory name to create; it must not exist, but its parent directory must exist.

  • ckpt_file – optional checkpoint file to copy into bundle

  • network – if given instead of ckpt_file this network’s weights will be stored in bundle

  • dataset_license – if True, a default license file called “data_license.txt” will be produced. This file is required if there are any license conditions stated for data your bundle uses.

  • metadata_str – optional metadata string to write to bundle, if not given a default will be used.

  • inference_str – optional inference string to write to bundle, if not given a default will be used.

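The resulting layout can be sketched as follows. This is a minimal illustration of the conventional configs/models/docs skeleton; make_bundle_skeleton is hypothetical, and the real init_bundle also writes default metadata/inference configs and can store checkpoint weights.

```python
import json
import tempfile
from pathlib import Path

def make_bundle_skeleton(bundle_dir, metadata=None):
    # Create the conventional configs/ models/ docs/ layout and write a
    # metadata stub under configs/.
    root = Path(bundle_dir)
    (root / "configs").mkdir(parents=True)
    (root / "models").mkdir()
    (root / "docs").mkdir()
    meta = metadata or {"version": "0.0.1"}  # stub content, not the full schema
    (root / "configs" / "metadata.json").write_text(json.dumps(meta, indent=2))
    return root

root = make_bundle_skeleton(Path(tempfile.mkdtemp()) / "my_bundle")
print(sorted(p.name for p in root.iterdir()))  # ['configs', 'docs', 'models']
```
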
monai.bundle.push_to_hf_hub(repo, name, bundle_dir, token=None, private=True, version=None, tag_as_latest_version=False, **upload_folder_kwargs)[source]#

Push a MONAI bundle to the Hugging Face Hub.

Typical usage examples:

python -m monai.bundle push_to_hf_hub --repo <HF repository id> --name <bundle name> --bundle_dir <bundle directory> --version <version> ...
Parameters:
  • repo – namespace (user or organization) and a repo name separated by a /, e.g. hf_username/bundle_name

  • name – name of the bundle directory to push.

  • bundle_dir – path to the bundle directory.

  • token – Hugging Face authentication token. Default is None (will default to the stored token).

  • private – Private visibility of the repository on Hugging Face. Default is True.

  • version – name of the version tag to create. Default is None (no version tag is created).

  • tag_as_latest_version – Whether to tag the commit as latest_version. This version will be downloaded by default when using bundle.download(). Default is False.

  • upload_folder_kwargs – Keyword arguments to pass to HfApi.upload_folder.

Returns:

URL of the Hugging Face repo

Return type:

repo_url

monai.bundle.update_kwargs(args=None, ignore_none=True, **kwargs)[source]#

Update the args dictionary with the input kwargs. For dict data, recursively update the content based on the keys.

Example:

from monai.bundle import update_kwargs
update_kwargs({'exist': 1}, exist=2, new_arg=3)
# return {'exist': 2, 'new_arg': 3}
Parameters:
  • args – source args dictionary (or a json/yaml filename to read as dictionary) to update.

  • ignore_none – whether to ignore input args with None value, default to True.

  • kwargs – key=value pairs to be merged into args.
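
The recursive-update semantics for nested dicts can be sketched in plain Python. recursive_update is an illustrative re-implementation, not the MONAI function, which additionally accepts a json/yaml filename for args.

```python
def recursive_update(args, ignore_none=True, **kwargs):
    # Merge kwargs into a copy of args: nested dicts are updated key by key,
    # and None values are skipped when ignore_none is True.
    args = dict(args or {})
    for k, v in kwargs.items():
        if ignore_none and v is None:
            continue
        if isinstance(v, dict) and isinstance(args.get(k), dict):
            args[k] = recursive_update(args[k], ignore_none, **v)
        else:
            args[k] = v
    return args

out = recursive_update({"net": {"chns": 3, "act": "relu"}}, net={"chns": 1})
print(out)  # {'net': {'chns': 1, 'act': 'relu'}}
```

Note how only the matching nested key is replaced while sibling keys survive, mirroring the "recursively update the content based on the keys" behavior described above.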