Event handlers

Model checkpoint loader

class monai.handlers.CheckpointLoader(load_path, load_dict, name=None, map_location=None, strict=True, strict_shape=True)[source]

CheckpointLoader acts as an Ignite handler to load checkpoint data from file. It can load variables for the network, optimizer, lr_scheduler, etc. If the checkpoint was saved after wrapping the model in torch.nn.DataParallel, save model.module instead, as PyTorch recommends, and then use this loader to load the model.

Parameters
  • load_path (str) – the file path of checkpoint, it should be a PyTorch pth file.

  • load_dict (Dict) –

    target objects to load the checkpoint into. For example:

    {'network': net, 'optimizer': optimizer, 'lr_scheduler': lr_scheduler}
    

  • name (Optional[str]) – identifier of the logging.logger to use; if None, defaults to engine.logger.

  • map_location (Optional[Dict]) – when loading the module for distributed training/evaluation, provide an appropriate map_location argument to prevent a process from stepping into other processes’ devices. If map_location is missing, torch.load will first load the module to CPU and then copy each parameter to where it was saved, which would result in all processes on the same machine using the same set of devices.

  • strict (bool) – whether to strictly enforce that the keys and data shapes in the state_dict of every item of load_dict match the state_dict of the corresponding items of the checkpoint. Defaults to True.

  • strict_shape (bool) – whether to enforce the data shape of the matched layers in the checkpoint. If False, it will skip the layers whose data shape differs from the checkpoint content and ignore the strict arg. This can be a useful advanced feature for transfer learning; users should fully understand which layers will have different shapes. Defaults to True.

Note: if strict_shape=False, the loader will only load checkpoint data for torch.nn.Module items and skip the other items in load_dict. For example, if the shape of some layers in the current model can’t match the checkpoint, the parameter_group of the current optimizer may also fail to match the checkpoint, so loading the checkpoint for the optimizer is skipped.

For more details about loading checkpoints, please refer to: https://pytorch.org/ignite/v0.4.5/generated/ignite.handlers.checkpoint.Checkpoint.html#ignite.handlers.checkpoint.Checkpoint.load_objects and https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.load_state_dict.

attach(engine)[source]
Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None
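
A minimal usage sketch, assuming an existing Ignite trainer engine; the checkpoint path and the objects below are hypothetical:

import torch

from monai.handlers import CheckpointLoader

net = torch.nn.Linear(10, 2)
optimizer = torch.optim.Adam(net.parameters())

# restore both the network and the optimizer state from one checkpoint file
CheckpointLoader(
    load_path="./checkpoints/checkpoint_epoch=5.pt",  # hypothetical path
    load_dict={"network": net, "optimizer": optimizer},
).attach(trainer)  # `trainer` is an assumed, pre-built Ignite Engine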

Model checkpoint saver

class monai.handlers.CheckpointSaver(save_dir, save_dict, name=None, file_prefix='', save_final=False, final_filename=None, save_key_metric=False, key_metric_name=None, key_metric_n_saved=1, key_metric_filename=None, key_metric_save_state=False, key_metric_greater_or_equal=False, key_metric_negative_sign=False, epoch_level=True, save_interval=0, n_saved=None)[source]

CheckpointSaver acts as an Ignite handler to save checkpoint data into files. It supports saving according to metric results, epoch number, or iteration number, and can also save the last model or save on exception.

Parameters
  • save_dir (str) – the target directory to save the checkpoints.

  • save_dict (Dict) –

    source objects to save into the checkpoint. For example:

    {'network': net, 'optimizer': optimizer, 'lr_scheduler': lr_scheduler}
    

  • name (Optional[str]) – identifier of the logging.logger to use; if None, defaults to engine.logger.

  • file_prefix (str) – prefix for the filenames to which objects will be saved.

  • save_final (bool) – whether to save a checkpoint or session at the final iteration or on exception. If checkpoints are to be saved when an exception is raised, put this handler before StatsHandler in the handler list, because with Ignite only the first attached handler is triggered for the EXCEPTION_RAISED event.

  • final_filename (Optional[str]) – set a fixed filename to save the final model if save_final=True. If None, default to checkpoint_final_iteration=N.pt.

  • save_key_metric (bool) – whether to save a checkpoint or session when the value of key_metric is higher than all previous values during training. The metric value is kept to 4 decimal places; the checkpoint name is: {file_prefix}_key_metric=0.XXXX.pth.

  • key_metric_name (Optional[str]) – the name of key_metric in ignite metrics dictionary. If None, use engine.state.key_metric instead.

  • key_metric_n_saved (int) – save top N checkpoints or sessions, sorted by the value of key metric in descending order.

  • key_metric_filename (Optional[str]) – set a fixed filename to save the best metric model; if not None, key_metric_n_saved should be 1 and only the best metric model is kept.

  • key_metric_save_state (bool) – whether to save the tracking list of the key metric in the checkpoint file. If True, an object is saved in the checkpoint file under the key checkpointer, to be consistent with the include_self arg of Checkpoint in ignite: https://pytorch.org/ignite/v0.4.5/generated/ignite.handlers.checkpoint.Checkpoint.html. Typically, it’s used to resume training and compare the current metric with the previous N values.

  • key_metric_greater_or_equal (bool) – if True, the latest equally scored model is stored. Otherwise, the first equally scored model is saved. Defaults to False.

  • key_metric_negative_sign (bool) – whether to add a negative sign to the metric score when comparing metrics; useful for error-like metrics where smaller is better (objects with larger score are retained). Defaults to False.

  • epoch_level (bool) – save checkpoint during training for every N epochs or every N iterations. True is epoch level, False is iteration level.

  • save_interval (int) – save checkpoint every N epochs, default is 0 to save no checkpoint.

  • n_saved (Optional[int]) – save the latest N checkpoints at epoch level or iteration level; None means save all.

Note

CheckpointSaver can be used during training, validation or evaluation. Examples of saved files:

  • checkpoint_iteration=400.pt

  • checkpoint_iteration=800.pt

  • checkpoint_epoch=1.pt

  • checkpoint_final_iteration=1000.pt

  • checkpoint_key_metric=0.9387.pt
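
A minimal sketch, assuming `net`, `optimizer` and an Ignite `trainer` already exist; it saves the best-metric models, periodic epoch checkpoints and a final checkpoint:

from monai.handlers import CheckpointSaver

CheckpointSaver(
    save_dir="./checkpoints",
    save_dict={"network": net, "optimizer": optimizer},
    save_final=True,        # also save at the final iteration
    save_key_metric=True,   # save when the key metric improves
    key_metric_n_saved=3,   # keep the top 3 metric checkpoints
    save_interval=1,        # save every epoch (epoch_level=True by default)
    n_saved=2,              # keep only the latest 2 interval checkpoints
).attach(trainer)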

attach(engine)[source]
Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

completed(engine)[source]

Callback for the train or validation/evaluation completed Event. Saves the final checkpoint if save_final is True.

Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

exception_raised(engine, e)[source]

Callback for the train or validation/evaluation exception raised Event. Saves the current data as the final checkpoint if save_final is True. This callback may be skipped because with Ignite only the first attached handler is triggered for the EXCEPTION_RAISED event.

Parameters
  • engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

  • e (Exception) – the exception caught in Ignite during engine.run().

Return type

None

interval_completed(engine)[source]

Callback for the train epoch/iteration completed Event. Saves a checkpoint if save_interval = N is configured.

Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

load_state_dict(state_dict)[source]

Utility to resume the internal state of the key metric tracking list if configured to save checkpoints based on the key metric value. Note that key_metric_save_state=True must have been set when saving the previous checkpoint.

Example:

CheckpointSaver(
    ...
    save_key_metric=True,
    key_metric_save_state=True,  # config to also save the state of this saver
).attach(engine)
engine.run(...)

# resumed training with a new CheckpointSaver
saver = CheckpointSaver(save_key_metric=True, ...)
# load the previous key metric tracking list into saver
CheckpointLoader("/test/model.pt"), {"checkpointer": saver}).attach(engine)
Return type

None

metrics_completed(engine)[source]

Callback to compare metrics and save models in train or validation when epoch completed.

Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

Metrics saver

class monai.handlers.MetricsSaver(save_dir, metrics='*', metric_details=None, batch_transform=<function MetricsSaver.<lambda>>, summary_ops=None, save_rank=0, delimiter='\\t', output_type='csv')[source]

Ignite handler to save metric values and details into the expected files.

Parameters
  • save_dir (str) – directory to save the metrics and metric details.

  • metrics (Union[str, Sequence[str], None]) – expected final metrics to save into files; can be None, “*”, or a list of strings. None: don’t save any metrics into files. “*”: save all the existing metrics in the engine.state.metrics dict into separate files. List of strings: specify the expected metrics to save. Defaults to “*”, saving all the metrics into metrics.csv.

  • metric_details (Union[str, Sequence[str], None]) – expected metric details to save into files; the data comes from engine.state.metric_details, which should be provided by the various Metrics. Typically these are intermediate values in the metric computation, for example, the mean dice of every channel of every image in the validation dataset. The data must contain at least 2 dims (batch, classes, …); if not, it will be unsqueezed to 2 dims. This arg can be None, “*”, or a list of strings. None: don’t save any metric_details into files. “*”: save all the existing metric_details in the engine.state.metric_details dict into separate files. List of strings: specify the metric_details of the expected metrics to save. If not None, every metric_details array is saved to a separate {metric name}_raw.csv file.

  • batch_transform (Callable) – a callable that is used to extract the meta_data dictionary of the input images from ignite.engine.state.batch if saving metric details. the purpose is to get the input filenames from the meta_data and store with metric details together.

  • summary_ops (Union[str, Sequence[str], None]) –

    expected computation operations to generate the summary report. It can be None, “*”, or a list of strings; defaults to None. None: don’t generate a summary report for the metric_details. “*”: generate a summary report for every metric_details with all the supported operations. List of strings: generate a summary report for every metric_details with the specified operations, which should be within the list: [“mean”, “median”, “max”, “min”, “<int>percentile”, “std”, “notnans”]. The number in “<int>percentile” should be in [0, 100], like “15percentile”; default: “90percentile”. For more details, please check: https://numpy.org/doc/stable/reference/generated/numpy.nanpercentile.html. Note that for the overall summary, it computes the nanmean of all classes for each image first, then computes the summary. Example of the generated summary report:

    class    mean    median    max    5percentile 95percentile  notnans
    class0  6.0000   6.0000   7.0000   5.1000      6.9000       2.0000
    class1  6.0000   6.0000   6.0000   6.0000      6.0000       1.0000
    mean    6.2500   6.2500   7.0000   5.5750      6.9250       2.0000
    

  • save_rank (int) – only the handler on specified rank will save to files in multi-gpus validation, default to 0.

  • delimiter (str) – the delimiter character in the CSV file, defaults to "\t" (tab).

  • output_type (str) – expected output file type, supported types: [“csv”], default to “csv”.
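
A minimal sketch, assuming an evaluator whose metrics include a “mean_dice” entry with recorded per-image details; the names are illustrative:

from monai.handlers import MetricsSaver

MetricsSaver(
    save_dir="./metrics",
    metrics="*",                   # save every metric in engine.state.metrics
    metric_details=["mean_dice"],  # per-image values from engine.state.metric_details
    summary_ops=["mean", "median", "95percentile"],
    delimiter=",",
).attach(evaluator)  # `evaluator` is an assumed Ignite Engine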

attach(engine)[source]
Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

CSV saver

class monai.handlers.ClassificationSaver(output_dir='./', filename='predictions.csv', overwrite=True, batch_transform=<function ClassificationSaver.<lambda>>, output_transform=<function ClassificationSaver.<lambda>>, name=None, save_rank=0, saver=None)[source]

Event handler triggered on completing every iteration to save the classification predictions as CSV file. If running in distributed data parallel, only saves CSV file in the specified rank.

Parameters
  • output_dir (str) – if saver=None, output CSV file directory.

  • filename (str) – if saver=None, name of the saved CSV file.

  • overwrite (bool) – if saver=None, whether to overwrite the existing file content; if True, will clear the file before saving, otherwise will append new content to the file.

  • batch_transform (Callable) – a callable that is used to extract the meta_data dictionary of the input images from ignite.engine.state.batch. the purpose is to get the input filenames from the meta_data and store with classification results together.

  • output_transform (Callable) – a callable that is used to extract the model prediction data from ignite.engine.state.output. the first dimension of its output will be treated as the batch dimension. each item in the batch will be saved individually.

  • name (Optional[str]) – identifier of logging.logger to use, defaulting to engine.logger.

  • save_rank (int) – only the handler on specified rank will save to CSV file in multi-gpus validation, default to 0.

  • saver (Optional[CSVSaver]) – the saver instance to save classification results, if None, create a CSVSaver internally. the saver must provide save_batch(batch_data, meta_data) and finalize() APIs.
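
A minimal sketch, assuming the evaluator’s batch carries an “image_meta_dict” entry and its output carries a “pred” entry (both key names are assumptions):

from monai.handlers import ClassificationSaver

ClassificationSaver(
    output_dir="./results",
    filename="predictions.csv",
    overwrite=True,
    batch_transform=lambda batch: batch["image_meta_dict"],  # assumed meta-dict key
    output_transform=lambda output: output["pred"],          # assumed prediction key
).attach(evaluator)  # `evaluator` is an assumed Ignite Engine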

attach(engine)[source]
Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

Ignite Metric

class monai.handlers.IgniteMetric(metric_fn, output_transform=<function IgniteMetric.<lambda>>, save_details=True)[source]

Base Metric class based on ignite event handler mechanism. The input prediction or label data can be a PyTorch Tensor or numpy array with batch dim and channel dim, or a list of PyTorch Tensor or numpy array without batch dim.

Parameters
  • metric_fn (CumulativeIterationMetric) – callable function or class to compute raw metric results after every iteration. expect to return a Tensor with shape (batch, channel, …) or tuple (Tensor, not_nans).

  • output_transform (Callable) – callable to extract y_pred and y from ignite.engine.state.output then construct (y_pred, y) pair, where y_pred and y can be batch-first Tensors or lists of channel-first Tensors. the form of (y_pred, y) is required by the update(). for example: if ignite.engine.state.output is {“pred”: xxx, “label”: xxx, “other”: xxx}, output_transform can be lambda x: (x[“pred”], x[“label”]).

  • save_details (bool) – whether to save metric computation details per image, for example: mean_dice of every image. default to True, will save to engine.state.metric_details dict with the metric name as key.

attach(engine, name)[source]

Attaches current metric to provided engine. On the end of engine’s run, engine.state.metrics dictionary will contain computed metric’s value under provided name.

Parameters
  • engine (Engine) – the engine to which the metric must be attached.

  • name (str) – the name of the metric to attach.

Return type

None

compute()[source]
Raises

NotComputableError – When compute is called before an update occurs.

Return type

Any

reset()[source]

Resets the metric to its initial state.

By default, this is called at the start of each epoch.

Return type

None

update(output)[source]
Parameters

output (Sequence[Tensor]) – sequence with contents [y_pred, y].

Raises

ValueError – When output length is not 2. metric_fn can only support y_pred and y.

Return type

None

Mean Dice metrics handler

class monai.handlers.MeanDice(include_background=True, output_transform=<function MeanDice.<lambda>>, save_details=True)[source]

Computes Dice score metric from full size Tensor and collects average over batch, class-channels, iterations.

Parameters
  • include_background (bool) – whether to include dice computation on the first channel of the predicted output. Defaults to True.

  • output_transform (Callable) – callable to extract y_pred and y from ignite.engine.state.output then construct (y_pred, y) pair, where y_pred and y can be batch-first Tensors or lists of channel-first Tensors. the form of (y_pred, y) is required by the update(). for example: if ignite.engine.state.output is {“pred”: xxx, “label”: xxx, “other”: xxx}, output_transform can be lambda x: (x[“pred”], x[“label”]).

  • save_details (bool) – whether to save metric computation details per image, for example: mean dice of every image. default to True, will save to engine.state.metric_details dict with the metric name as key.

See also

monai.metrics.meandice.compute_meandice()
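
A short sketch combining MeanDice with from_engine (documented under Utilities below); the evaluator engine and the dictionary keys “pred”/“label” are assumptions:

from monai.handlers import MeanDice, from_engine

mean_dice = MeanDice(
    include_background=False,
    output_transform=from_engine(["pred", "label"]),
)
mean_dice.attach(evaluator, name="val_mean_dice")
# after evaluator.run(), the result is in evaluator.state.metrics["val_mean_dice"]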

ROC AUC metrics handler

class monai.handlers.ROCAUC(average=<Average.MACRO: 'macro'>, output_transform=<function ROCAUC.<lambda>>)[source]

Computes the Area Under the Receiver Operating Characteristic Curve (ROC AUC), accumulating predictions and the ground truth during an epoch and applying compute_roc_auc.

Parameters
  • average (Union[Average, str]) –

    {"macro", "weighted", "micro", "none"} Type of averaging performed if not binary classification. Defaults to "macro".

    • "macro": calculate metrics for each label, and find their unweighted mean.

      This does not take label imbalance into account.

    • "weighted": calculate metrics for each label, and find their average,

      weighted by support (the number of true instances for each label).

    • "micro": calculate metrics globally by considering each element of the label

      indicator matrix as a label.

    • "none": the scores for each class are returned.

  • output_transform (Callable) – callable to extract y_pred and y from ignite.engine.state.output then construct (y_pred, y) pair, where y_pred and y can be batch-first Tensors or lists of channel-first Tensors. the form of (y_pred, y) is required by the update(). for example: if ignite.engine.state.output is {“pred”: xxx, “label”: xxx, “other”: xxx}, output_transform can be lambda x: (x[“pred”], x[“label”]).

Note

ROCAUC expects y to be comprised of 0’s and 1’s. y_pred must either be probability estimates or confidence values.
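
A short sketch, with the same assumptions as above about the “pred”/“label” keys in engine.state.output:

from monai.handlers import ROCAUC, from_engine

auc = ROCAUC(
    average="macro",
    output_transform=from_engine(["pred", "label"]),
)
auc.attach(evaluator, name="val_auc")  # `evaluator` is an assumed Ignite Engine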

Confusion matrix metrics handler

class monai.handlers.ConfusionMatrix(include_background=True, metric_name='hit_rate', output_transform=<function ConfusionMatrix.<lambda>>, save_details=True)[source]

Computes confusion-matrix-related metrics from a full-size Tensor and collects the average over batch, class-channels and iterations.

Parameters
  • include_background (bool) – whether to include metric computation on the first channel of the predicted output. Defaults to True.

  • metric_name (str) – ["sensitivity", "specificity", "precision", "negative predictive value", "miss rate", "fall out", "false discovery rate", "false omission rate", "prevalence threshold", "threat score", "accuracy", "balanced accuracy", "f1 score", "matthews correlation coefficient", "fowlkes mallows index", "informedness", "markedness"] Some of these metrics have multiple aliases, and those alias names can be used instead.

  • output_transform (Callable) – callable to extract y_pred and y from ignite.engine.state.output then construct (y_pred, y) pair, where y_pred and y can be batch-first Tensors or lists of channel-first Tensors. the form of (y_pred, y) is required by the update(). for example: if ignite.engine.state.output is {“pred”: xxx, “label”: xxx, “other”: xxx}, output_transform can be lambda x: (x[“pred”], x[“label”]).

  • save_details (bool) – whether to save metric computation details per image, for example: TP/TN/FP/FN of every image. default to True, will save to engine.state.metric_details dict with the metric name as key.

See also

monai.metrics.confusion_matrix()

Hausdorff distance metrics handler

class monai.handlers.HausdorffDistance(include_background=False, distance_metric='euclidean', percentile=None, directed=False, output_transform=<function HausdorffDistance.<lambda>>, save_details=True)[source]

Computes Hausdorff distance from full size Tensor and collects average over batch, class-channels, iterations.

Parameters
  • include_background (bool) – whether to include distance computation on the first channel of the predicted output. Defaults to False.

  • distance_metric (str) – ["euclidean", "chessboard", "taxicab"] the metric used to compute surface distance. Defaults to "euclidean".

  • percentile (Optional[float]) – an optional float number between 0 and 100. If specified, the corresponding percentile of the Hausdorff Distance is computed instead of the maximum. Defaults to None.

  • directed (bool) – whether to calculate directed Hausdorff distance. Defaults to False.

  • output_transform (Callable) – callable to extract y_pred and y from ignite.engine.state.output then construct (y_pred, y) pair, where y_pred and y can be batch-first Tensors or lists of channel-first Tensors. the form of (y_pred, y) is required by the update(). for example: if ignite.engine.state.output is {“pred”: xxx, “label”: xxx, “other”: xxx}, output_transform can be lambda x: (x[“pred”], x[“label”]).

  • save_details (bool) – whether to save metric computation details per image, for example: hausdorff distance of every image. default to True, will save to engine.state.metric_details dict with the metric name as key.

Surface distance metrics handler

class monai.handlers.SurfaceDistance(include_background=False, symmetric=False, distance_metric='euclidean', output_transform=<function SurfaceDistance.<lambda>>, save_details=True)[source]

Computes surface distance from full size Tensor and collects average over batch, class-channels, iterations.

Parameters
  • include_background (bool) – whether to include distance computation on the first channel of the predicted output. Defaults to False.

  • symmetric (bool) – whether to calculate the symmetric average surface distance between seg_pred and seg_gt. Defaults to False.

  • distance_metric (str) – ["euclidean", "chessboard", "taxicab"] the metric used to compute surface distance. Defaults to "euclidean".

  • output_transform (Callable) – callable to extract y_pred and y from ignite.engine.state.output then construct (y_pred, y) pair, where y_pred and y can be batch-first Tensors or lists of channel-first Tensors. the form of (y_pred, y) is required by the update(). for example: if ignite.engine.state.output is {“pred”: xxx, “label”: xxx, “other”: xxx}, output_transform can be lambda x: (x[“pred”], x[“label”]).

  • save_details (bool) – whether to save metric computation details per image, for example: surface distance of every image. default to True, will save to engine.state.metric_details dict with the metric name as key.

Mean squared error metrics handler

class monai.handlers.MeanSquaredError(output_transform=<function MeanSquaredError.<lambda>>, save_details=True)[source]

Computes Mean Squared Error from full size Tensor and collects average over batch, iterations.

Parameters
  • output_transform (Callable) – callable to extract y_pred and y from ignite.engine.state.output then construct (y_pred, y) pair, where y_pred and y can be batch-first Tensors or lists of channel-first Tensors. the form of (y_pred, y) is required by the update(). for example: if ignite.engine.state.output is {“pred”: xxx, “label”: xxx, “other”: xxx}, output_transform can be lambda x: (x[“pred”], x[“label”]).

  • save_details (bool) – whether to save metric computation details per image, for example: mean squared error of every image. default to True, will save to engine.state.metric_details dict with the metric name as key.

Mean absolute error metrics handler

class monai.handlers.MeanAbsoluteError(output_transform=<function MeanAbsoluteError.<lambda>>, save_details=True)[source]

Computes Mean Absolute Error from full size Tensor and collects average over batch, iterations.

Parameters
  • output_transform (Callable) – transform the ignite.engine.state.output into [y_pred, y] pair.

  • save_details (bool) – whether to save metric computation details per image, for example: mean absolute error of every image. default to True, will save to engine.state.metric_details dict with the metric name as key.

Root mean squared error metrics handler

class monai.handlers.RootMeanSquaredError(output_transform=<function RootMeanSquaredError.<lambda>>, save_details=True)[source]

Computes Root Mean Squared Error from full size Tensor and collects average over batch, iterations.

Parameters
  • output_transform (Callable) – transform the ignite.engine.state.output into [y_pred, y] pair.

  • save_details (bool) – whether to save metric computation details per image, for example: root mean squared error of every image. default to True, will save to engine.state.metric_details dict with the metric name as key.

Peak signal to noise ratio metrics handler

class monai.handlers.PeakSignalToNoiseRatio(max_val, output_transform=<function PeakSignalToNoiseRatio.<lambda>>, save_details=True)[source]

Computes Peak Signal to Noise Ratio from full size Tensor and collects average over batch, iterations.

Parameters
  • max_val (Union[int, float]) – The dynamic range of the images/volumes (i.e., the difference between the maximum and the minimum allowed values e.g. 255 for a uint8 image).

  • output_transform (Callable) – transform the ignite.engine.state.output into [y_pred, y] pair.

  • save_details (bool) – whether to save metric computation details per image, for example: PSNR of every image. default to True, will save to engine.state.metric_details dict with the metric name as key.

  • reduction – {"none", "mean", "sum", "mean_batch", "sum_batch", "mean_channel", "sum_channel"} defines the mode to reduce the computation result. Defaults to "mean".

Metric logger

class monai.handlers.MetricLogger(loss_transform=<function _get_loss_from_output>, metric_transform=<function MetricLogger.<lambda>>, evaluator=None)[source]

Collect per-iteration metrics and loss values from the attached trainer. This will also collect metric values from a given evaluator object, which is expected to perform evaluation at the end of training epochs. This class is useful for collecting loss and metric values in one place for storage with checkpoint savers (state_dict and load_state_dict methods are provided as expected by PyTorch and Ignite) and for graphing during training.

Example:

# construct an evaluator saving mean dice metric values in the key "val_mean_dice"
evaluator = SupervisedEvaluator(..., key_val_metric={"val_mean_dice": MeanDice(...)})

# construct the logger and associate with evaluator to extract metric values from it
logger = MetricLogger(evaluator=evaluator)

# construct the trainer with the logger passed in as a handler so that it logs loss values
trainer = SupervisedTrainer(..., train_handlers=[logger, ValidationHandler(1, evaluator)])

# run training, logger.loss will be a list of (iteration, loss) values, logger.metrics a dict
# with key "val_mean_dice" storing a list of (iteration, metric) values
trainer.run()

Parameters
  • loss_transform (Callable) – Converts the output value from the trainer’s state into a loss value

  • metric_transform (Callable) – Converts the metric value coming from the trainer/evaluator’s state into a storable value

  • evaluator (Optional[Engine]) – Optional evaluator to consume metric results from at the end of its evaluation run

attach(engine)[source]
Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

attach_evaluator(evaluator)[source]

Attach event handlers to the given evaluator to log metric values from it.

Parameters

evaluator (Engine) – Ignite Engine implementing network evaluation

Return type

None

log_metrics(engine)[source]

Log metrics from the given Engine’s state member.

Parameters

engine (Engine) – Ignite Engine to log from

Return type

None

Segmentation saver

class monai.handlers.SegmentationSaver(output_dir='./', output_postfix='seg', output_ext='.nii.gz', resample=True, mode='nearest', padding_mode=<GridSamplePadMode.BORDER: 'border'>, scale=None, dtype=<class 'numpy.float64'>, output_dtype=<class 'numpy.float32'>, squeeze_end_dims=True, data_root_dir='', batch_transform=<function SegmentationSaver.<lambda>>, output_transform=<function SegmentationSaver.<lambda>>, name=None)[source]

Event handler triggered on completing every iteration to save the segmentation predictions into files. It can extract the input image meta data (filename, affine, original_shape, etc.) and resample the predictions based on the meta data. The name of the saved file will be {input_image_name}_{output_postfix}{output_ext}, where the input image name is extracted from the meta data dictionary. If no meta data is provided, an index starting from 0 is used as the filename prefix. The predictions can be a PyTorch Tensor with [B, C, H, W, [D]] shape or a list of Tensors without the batch dim.

Parameters
  • output_dir (str) – output image directory.

  • output_postfix (str) – a string appended to all output file names, default to seg.

  • output_ext (str) – output file extension name, available extensions: .nii.gz, .nii, .png.

  • resample (bool) – whether to resample before saving the data array. if saving PNG format image, based on the spatial_shape from metadata. if saving NIfTI format image, based on the original_affine from metadata.

  • mode (Union[GridSampleMode, InterpolateMode, str]) –

    This option is used when resample = True. Defaults to "nearest".

  • padding_mode (Union[GridSamplePadMode, str]) –

    This option is used when resample = True. Defaults to "border".

  • scale (Optional[int]) – {255, 65535} postprocess data by clipping to [0, 1] and scaling [0, 255] (uint8) or [0, 65535] (uint16). Default is None to disable scaling. It’s used for PNG format only.

  • dtype (Union[dtype, type, None]) – data type for resampling computation. Defaults to np.float64 for best precision. If None, use the data type of input data. It’s used for Nifti format only.

  • output_dtype (Union[dtype, type, None]) – data type for saving data. Defaults to np.float32, it’s used for Nifti format only.

  • squeeze_end_dims (bool) – if True, any trailing singleton dimensions will be removed (after the channel has been moved to the end). So if input is (C,H,W,D), this will be altered to (H,W,D,C), and then if C==1, it will be saved as (H,W,D). If D also ==1, it will be saved as (H,W). If false, image will always be saved as (H,W,D,C). it’s used for NIfTI format only.

  • data_root_dir (str) – if not empty, it specifies the beginning parts of the input file’s absolute path. It’s used to compute input_file_rel_path, the relative path to the file from data_root_dir, to preserve the folder structure when saving in case there are files in different folders with the same file names. For example, with input_file_name: /foo/bar/test1/image.nii, output_postfix: seg, output_ext: .nii.gz, output_dir: /output, data_root_dir: /foo/bar, the output will be: /output/test1/image/image_seg.nii.gz

  • batch_transform (Callable) – a callable that is used to extract the meta_data dictionary of the input images from ignite.engine.state.batch. the purpose is to extract necessary information from the meta data: filename, affine, original_shape, etc.

  • output_transform (Callable) – a callable that is used to extract the model prediction data from ignite.engine.state.output. the first dimension of its output will be treated as the batch dimension. each item in the batch will be saved individually.

  • name (Optional[str]) – identifier of logging.logger to use, defaulting to engine.logger.
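
A minimal sketch, assuming the evaluator’s batch and output carry “image_meta_dict” and “pred” entries respectively (assumed key names):

from monai.handlers import SegmentationSaver

SegmentationSaver(
    output_dir="./output",
    output_postfix="seg",
    output_ext=".nii.gz",
    resample=True,
    batch_transform=lambda batch: batch["image_meta_dict"],  # assumed meta-dict key
    output_transform=lambda output: output["pred"],          # assumed prediction key
).attach(evaluator)  # `evaluator` is an assumed Ignite Engine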

attach(engine)[source]
Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

Training stats handler

class monai.handlers.StatsHandler(epoch_print_logger=None, iteration_print_logger=None, output_transform=<function StatsHandler.<lambda>>, global_epoch_transform=<function StatsHandler.<lambda>>, name=None, tag_name='Loss', key_var_format='{}: {:.4f} ', logger_handler=None)[source]

StatsHandler defines a set of Ignite Event-handlers for all the log printing logic. It can be used with any Ignite Engine (trainer, validator or evaluator) and supports logging at both epoch level and iteration level with pre-defined loggers.

Default behaviors:
  • When EPOCH_COMPLETED, logs engine.state.metrics using self.logger.

  • When ITERATION_COMPLETED, logs self.output_transform(engine.state.output) using self.logger.

Parameters
  • epoch_print_logger (Optional[Callable[[Engine], Any]]) – customized callable printer for epoch level logging. Must accept parameter “engine”, use default printer if None.

  • iteration_print_logger (Optional[Callable[[Engine], Any]]) – customized callable printer for iteration level logging. Must accept parameter “engine”, use default printer if None.

  • output_transform (Callable) – a callable that is used to transform the ignite.engine.state.output into a scalar to print, or a dictionary of {key: scalar}. In the latter case, the output string will be formatted as key: value. By default this value logging happens when every iteration completes. The default behavior is to print the loss from output[0], as the output is a decollated list and the loss value is replicated for every item of the decollated list.

  • global_epoch_transform (Callable) – a callable that is used to customize global epoch number. For example, in evaluation, the evaluator engine might want to print synced epoch number with the trainer engine.

  • name (Optional[str]) – identifier of logging.logger to use, defaulting to engine.logger.

  • tag_name (str) – when iteration output is a scalar, tag_name is used to print tag_name: scalar_value to logger. Defaults to 'Loss'.

  • key_var_format (str) – a formatting string to control the output string format of key: value.

  • logger_handler (Optional[Handler]) – add additional handler to handle the stats data: save to file, etc. Add existing python logging handlers: https://docs.python.org/3/library/logging.handlers.html
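
A minimal sketch that prints the per-iteration loss; it assumes a decollated engine.state.output containing a “loss” key, extracted with from_engine (documented under Utilities below):

from monai.handlers import StatsHandler, from_engine

StatsHandler(
    tag_name="train_loss",
    output_transform=from_engine(["loss"], first=True),  # scalar loss from the first item
).attach(trainer)  # `trainer` is an assumed Ignite Engine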

attach(engine)[source]

Register a set of Ignite Event-Handlers to a specified Ignite engine.

Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

epoch_completed(engine)[source]

Handler for train or validation/evaluation epoch completed Event. Print epoch level log, default values are from Ignite state.metrics dict.

Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

exception_raised(engine, e)[source]

Handler for train or validation/evaluation exception raised Event. Print the exception information and traceback. This callback may be skipped because with Ignite only the first attached handler is triggered for the EXCEPTION_RAISED event.

Parameters
  • engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

  • e (Exception) – the exception caught in Ignite during engine.run().

Return type

None

iteration_completed(engine)[source]

Handler for train or validation/evaluation iteration completed Event. Print iteration level log; default values are from the Ignite engine.state.output.

Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

Tensorboard handlers

class monai.handlers.TensorBoardHandler(summary_writer=None, log_dir='./runs')[source]

Base class for the handlers to write data into TensorBoard.

Parameters
  • summary_writer (Optional[SummaryWriter]) – user can specify TensorBoard SummaryWriter, default to create a new writer.

  • log_dir (str) – if using default SummaryWriter, write logs to this directory, default is ./runs.

close()[source]

Close the summary writer if created in this TensorBoard handler.

class monai.handlers.TensorBoardStatsHandler(summary_writer=None, log_dir='./runs', epoch_event_writer=None, epoch_interval=1, iteration_event_writer=None, iteration_interval=1, output_transform=<function TensorBoardStatsHandler.<lambda>>, global_epoch_transform=<function TensorBoardStatsHandler.<lambda>>, tag_name='Loss')[source]

TensorBoardStatsHandler defines a set of Ignite Event-handlers for all the TensorBoard logic. It can be used with any Ignite Engine (trainer, validator or evaluator) and supports both epoch level and iteration level with a pre-defined TensorBoard event writer. The expected data sources are Ignite engine.state.output and engine.state.metrics.

Default behaviors:
  • When EPOCH_COMPLETED, write each dictionary item in engine.state.metrics to TensorBoard.

  • When ITERATION_COMPLETED, write each dictionary item in self.output_transform(engine.state.output) to TensorBoard.

Parameters
  • summary_writer (Optional[SummaryWriter]) – user can specify TensorBoard SummaryWriter, default to create a new writer.

  • log_dir (str) – if using default SummaryWriter, write logs to this directory, default is ./runs.

  • epoch_event_writer (Optional[Callable[[Engine, SummaryWriter], Any]]) – customized callable TensorBoard writer for epoch level. Must accept parameter “engine” and “summary_writer”, use default event writer if None.

  • epoch_interval (int) – the epoch interval at which the epoch_event_writer is called. Defaults to 1.

  • iteration_event_writer (Optional[Callable[[Engine, SummaryWriter], Any]]) – customized callable TensorBoard writer for iteration level. Must accept parameter “engine” and “summary_writer”, use default event writer if None.

  • iteration_interval (int) – the iteration interval at which the iteration_event_writer is called. Defaults to 1.

  • output_transform (Callable) – a callable that is used to transform the ignite.engine.state.output into a scalar to plot, or a dictionary of {key: scalar}. In the latter case, the output string will be formatted as key: value. By default this value plotting happens when every iteration completed. The default behavior is to print loss from output[0] as output is a decollated list and we replicated loss value for every item of the decollated list.

  • global_epoch_transform (Callable) – a callable that is used to customize the global epoch number. For example, in evaluation, the evaluator engine might want to use the trainer engine’s epoch number when plotting epoch vs metric curves.

  • tag_name (str) – when iteration output is a scalar, tag_name is used to plot, defaults to 'Loss'.
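
A minimal sketch mirroring the StatsHandler example above, but writing the loss curve to TensorBoard; the “loss” key is an assumption:

from monai.handlers import TensorBoardStatsHandler, from_engine

TensorBoardStatsHandler(
    log_dir="./runs",
    tag_name="train_loss",
    output_transform=from_engine(["loss"], first=True),
).attach(trainer)  # `trainer` is an assumed Ignite Engine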

attach(engine)[source]

Register a set of Ignite Event-Handlers to a specified Ignite engine.

Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

epoch_completed(engine)[source]

Handler for train or validation/evaluation epoch completed Event. Write epoch level events, default values are from Ignite state.metrics dict.

Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

iteration_completed(engine)[source]

Handler for train or validation/evaluation iteration completed Event. Write iteration level events; default values are from the Ignite engine.state.output.

Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

class monai.handlers.TensorBoardImageHandler(summary_writer=None, log_dir='./runs', interval=1, epoch_level=True, batch_transform=<function TensorBoardImageHandler.<lambda>>, output_transform=<function TensorBoardImageHandler.<lambda>>, global_iter_transform=<function TensorBoardImageHandler.<lambda>>, index=0, max_channels=1, max_frames=64)[source]

TensorBoardImageHandler is an Ignite Event handler that can visualize images, labels and outputs as 2D/3D images. 2D output (shape [Batch, channel, H, W]) will be shown as a simple image using the first element in the batch. For 3D to ND output (shape [Batch, channel, H, W, D]), the last three dimensions of up to self.max_channels images will be shown as animated GIFs along the last axis (typically Depth).

It can be used for any Ignite Engine (trainer, validator and evaluator). User can easily add it to engine for any expected Event, for example: EPOCH_COMPLETED, ITERATION_COMPLETED. The expected data source is ignite’s engine.state.batch and engine.state.output.

Default behavior:
  • Show y_pred as images (GIF for 3D) on TensorBoard when Event triggered,

  • Need to use batch_transform and output_transform to specify how many images to show and which channel to show.

  • Expects batch_transform(engine.state.batch) to return data format: (image[N, channel, …], label[N, channel, …]).

  • Expects output_transform(engine.state.output) to return a torch tensor in format (y_pred[N, channel, …], loss).

Parameters
  • summary_writer (Optional[SummaryWriter]) – user can specify TensorBoard SummaryWriter, default to create a new writer.

  • log_dir (str) – if using default SummaryWriter, write logs to this directory, default is ./runs.

  • interval (int) – plot content from engine.state every N epochs or every N iterations, default is 1.

  • epoch_level (bool) – plot content from engine.state every N epochs or N iterations. True is epoch level, False is iteration level.

  • batch_transform (Callable) – a callable that is used to extract image and label from ignite.engine.state.batch, then construct (image, label) pair. for example: if ignite.engine.state.batch is {“image”: xxx, “label”: xxx, “other”: xxx}, batch_transform can be lambda x: (x[“image”], x[“label”]). will use the result to plot image from result[0][index] and plot label from result[1][index].

  • output_transform (Callable) – a callable that is used to extract the predictions data from ignite.engine.state.output, will use the result to plot output from result[index].

  • global_iter_transform (Callable) – a callable that is used to customize global step number for TensorBoard. For example, in evaluation, the evaluator engine needs to know current epoch from trainer.

  • index (int) – plot which element in a data batch, default is the first element.

  • max_channels (int) – number of channels to plot.

  • max_frames (int) – number of frames for 2D-t plot.

attach(engine)[source]
Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

LR Schedule handler

class monai.handlers.LrScheduleHandler(lr_scheduler, print_lr=True, name=None, epoch_level=True, step_transform=<function LrScheduleHandler.<lambda>>)[source]

Ignite handler to update the Learning Rate based on PyTorch LR scheduler.

Parameters
  • lr_scheduler (Union[_LRScheduler, ReduceLROnPlateau]) – typically, lr_scheduler should be a PyTorch lr_scheduler object. If a customized version is used, it must have step and get_last_lr methods.

  • print_lr (bool) – whether to print out the latest learning rate with logging.

  • name (Optional[str]) – identifier of the logging.logger to use; if None, defaults to engine.logger.

  • epoch_level (bool) – execute lr_scheduler.step() after every epoch or every iteration. True is epoch level, False is iteration level.

  • step_transform (Callable[[Engine], Any]) – a callable that is used to transform the information from engine to expected input data of lr_scheduler.step() function if necessary.

Raises

TypeError – When step_transform is not callable.
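
A minimal sketch, assuming `optimizer` and an Ignite `trainer` already exist; it steps a standard PyTorch scheduler once per epoch:

import torch

from monai.handlers import LrScheduleHandler

scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
LrScheduleHandler(lr_scheduler=scheduler, print_lr=True, epoch_level=True).attach(trainer)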

attach(engine)[source]
Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

Validation handler

class monai.handlers.ValidationHandler(interval, validator=None, epoch_level=True)[source]

Attach validator to the trainer engine in Ignite. It supports executing validation every N epochs or every N iterations.

Parameters
  • interval (int) – do validation every N epochs or every N iterations during training.

  • validator (Optional[Evaluator]) – the validator to run when validation is triggered; expected to be an Evaluator. If None, set_validator() must be called before training.

  • epoch_level (bool) – execute validation every N epochs or N iterations. True is epoch level, False is iteration level.

Raises

TypeError – When validator is not a monai.engines.evaluator.Evaluator.
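
A minimal sketch, assuming a pre-built `evaluator` (a monai.engines Evaluator) and an Ignite `trainer`:

from monai.handlers import ValidationHandler

# run the evaluator after every training epoch
ValidationHandler(interval=1, validator=evaluator, epoch_level=True).attach(trainer)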

attach(engine)[source]
Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

set_validator(validator)[source]

Set the validator if it was not set in __init__().

SmartCache handler

class monai.handlers.SmartCacheHandler(smartcacher)[source]

Attach SmartCache logic to the engine in Ignite. It mainly includes the start, update_cache, and shutdown functions of SmartCacheDataset.

Parameters

smartcacher (SmartCacheDataset) – predefined SmartCacheDataset, will attach it to the engine.

Raises

TypeError – When smartcacher is not a monai.data.SmartCacheDataset.

attach(engine)[source]
Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

completed(engine)[source]

Callback for train or validation/evaluation completed Event. Stop the replacement thread of SmartCacheDataset.

Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

epoch_completed(engine)[source]

Callback for train or validation/evaluation epoch completed Event. Update cache content with replacement data.

Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

started(engine)[source]

Callback for train or validation/evaluation started Event. Start the replacement thread of SmartCacheDataset.

Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

Parameter Scheduler handler

class monai.handlers.ParamSchedulerHandler(parameter_setter, value_calculator, vc_kwargs, epoch_level=False, name=None, event=<Events.ITERATION_COMPLETED: 'iteration_completed'>)[source]

General purpose scheduler for parameter values. By default it can schedule with a linear, exponential, step or multistep function. One can also pass a Callable for customized scheduling logic.

Parameters
  • parameter_setter (Callable) – Function that sets the required parameter

  • value_calculator (Union[str,Callable]) – Either a string (‘linear’, ‘exponential’, ‘step’ or ‘multistep’) or Callable for custom logic.

  • vc_kwargs (Dict) – Dictionary that stores the required parameters for the value_calculator.

  • epoch_level (bool) – whether the step is based on epoch or iteration. Defaults to False.

  • name (Optional[str]) – identifier of the logging.logger to use; if None, defaults to engine.logger.

  • event (Optional[str]) – Event to which the handler attaches. Defaults to Events.ITERATION_COMPLETED.
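
A hedged sketch: linearly ramp a hypothetical loss weight over the first 1000 iterations. The target parameter and the vc_kwargs argument names for the built-in “linear” calculator are assumptions, not taken from this page:

from monai.handlers import ParamSchedulerHandler

ParamSchedulerHandler(
    parameter_setter=lambda v: setattr(loss_fn, "weight", v),  # hypothetical parameter to set
    value_calculator="linear",
    vc_kwargs={                # assumed argument names for the linear calculator
        "initial_value": 0.0,
        "step_constant": 0,
        "step_max_value": 1000,
        "max_value": 1.0,
    },
).attach(trainer)  # `trainer` is an assumed Ignite Engine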

attach(engine)[source]
Parameters

engine (Engine) – Ignite Engine that is used for training.

Return type

None

EarlyStop handler

class monai.handlers.EarlyStopHandler(patience, score_function, trainer=None, min_delta=0.0, cumulative_delta=False, epoch_level=True)[source]

EarlyStopHandler acts as an Ignite handler to stop training if there is no improvement after a given number of events. It’s based on the EarlyStopping handler in ignite.

Parameters
  • patience (int) – number of events to wait if no improvement and then stop the training.

  • score_function (Callable) – a function taking a single argument, the Engine object that the handler is attached to (can be a trainer or validator), and returning a score float. An improvement is considered when the score is higher.

  • trainer (Optional[Engine]) – trainer engine to stop the run if no improvement, if None, must call set_trainer() before training.

  • min_delta (float) – a minimum increase in the score to qualify as an improvement, i.e. an increase of less than or equal to min_delta will count as no improvement.

  • cumulative_delta (bool) – if True, min_delta defines an increase since the last patience reset, otherwise, it defines an increase after the last event, default to False.

  • epoch_level (bool) – check early stopping for every epoch or every iteration of the attached engine, True is epoch level, False is iteration level, default to epoch level.

Note

If running distributed training and using the loss value of every iteration to detect early stopping, the values may differ across ranks. To detect validation metrics and stop the training, users may attach this handler to the validator engine instead; in that case, the score_function is executed on the validator engine while trainer is the trainer engine.
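
A minimal sketch of the pattern described in the note above: the handler is attached to the validator, scores a validation metric, and stops the trainer. The metric name “val_mean_dice” and the engines are assumptions:

from monai.handlers import EarlyStopHandler

EarlyStopHandler(
    patience=10,
    score_function=lambda engine: engine.state.metrics["val_mean_dice"],  # assumed metric key
    trainer=trainer,  # the training engine to stop
).attach(evaluator)   # check the score after each validation run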

attach(engine)[source]
Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

set_trainer(trainer)[source]

Set the trainer to execute early stop if it was not set properly in __init__().

GarbageCollector handler

class monai.handlers.GarbageCollector(trigger_event='epoch', log_level=10)[source]

Runs the garbage collector after each trigger event (by default, after each epoch).

Parameters
  • trigger_event (str) – the event that triggers a call to this handler: “epoch”, after completion of each epoch (equivalent to ignite.engine.Events.EPOCH_COMPLETED); “iteration”, after completion of each iteration (equivalent to ignite.engine.Events.ITERATION_COMPLETED); or any Ignite built-in event from ignite.engine.Events. Defaults to “epoch”.

  • log_level (int) – log level (integer) for the garbage collection information: 50 (CRITICAL), 40 (ERROR), 30 (WARNING), 20 (INFO), 10 (DEBUG), 0 (NOTSET). Defaults to 10 (DEBUG).

Transform inverter

class monai.handlers.TransformInverter(transform, output_keys='pred', batch_keys='image', meta_keys=None, batch_meta_keys=None, meta_key_postfix='meta_dict', nearest_interp=True, to_tensor=True, device='cpu', post_func=<function TransformInverter.<lambda>>, num_workers=0)[source]

Ignite handler to automatically invert transforms. It takes engine.state.output as the input data and uses the transform information from engine.state.batch. Both engine.state.output and engine.state.batch are expected to be lists of dictionaries. The inverted data is saved back in place to engine.state.output with key: “{output_key}”, and the inverted meta dict is stored in engine.state.batch with key: “{meta_keys}” or “{key}_{meta_key_postfix}”.

Parameters
  • transform (InvertibleTransform) – a callable data transform on input data.

  • output_keys (Union[Collection[Hashable], Hashable]) – the key of expected data in ignite.engine.output, invert transforms on it. it also can be a list of keys, will invert transform for each of them. Default to “pred”. it’s in-place operation.

  • batch_keys (Union[Collection[Hashable], Hashable]) – the key of input data in ignite.engine.batch. will get the applied transforms for this input data, then invert them for the expected data with output_keys. It can also be a list of keys, each matches to the output_keys data. default to “image”.

  • meta_keys (Union[Collection[Hashable], Hashable, None]) – explicitly indicate the key for the inverted meta data dictionary. the meta data is a dictionary object which contains: filename, original_shape, etc. it can be a sequence of string, map to the keys. if None, will try to construct meta_keys by {key}_{meta_key_postfix}.

  • batch_meta_keys (Union[Collection[Hashable], Hashable, None]) – the key of the meta data of input data in ignite.engine.batch, will get the affine, data_shape, etc. the meta data is a dictionary object which contains: filename, original_shape, etc. it can be a sequence of string, map to the keys. if None, will try to construct meta_keys by {orig_key}_{meta_key_postfix}. meta data will also be inverted and stored in meta_keys.

  • meta_key_postfix (str) – if orig_meta_keys is None, use {orig_key}_{meta_key_postfix} to fetch the meta data from the dict; if meta_keys is None, use {key}_{meta_key_postfix}. Default is meta_dict; the meta data is a dictionary object. For example, to handle the orig_key image, read/write affine matrices from the metadata image_meta_dict dictionary’s affine field. The inverted meta dict will be stored with key: “{key}_{meta_key_postfix}”.

  • nearest_interp (Union[bool, Sequence[bool]]) – whether to use nearest interpolation mode when inverting the spatial transforms, default to True. If False, use the same interpolation mode as the original transform. it also can be a list of bool, each matches to the output_keys data.

  • to_tensor (Union[bool, Sequence[bool]]) – whether to convert the inverted data into PyTorch Tensor first, default to True. it also can be a list of bool, each matches to the output_keys data.

  • device (Union[str, device, Sequence[Union[str, device]]]) – if converted to Tensor, move the inverted results to target device before post_func, default to “cpu”, it also can be a list of string or torch.device, each matches to the output_keys data.

  • post_func (Union[Callable, Sequence[Callable]]) – post processing for the inverted data, should be a callable function. it also can be a list of callable, each matches to the output_keys data.

  • num_workers (Optional[int]) – number of workers when run data loader for inverse transforms, default to 0 as only run one iteration and multi-processing may be even slower. Set to None, to use the num_workers of the input transform data loader.
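
A minimal sketch, assuming `val_transforms` is the invertible preprocessing Compose applied to the “image” key and that the evaluator’s output has a “pred” key:

from monai.handlers import TransformInverter

TransformInverter(
    transform=val_transforms,  # assumed invertible transform chain
    output_keys="pred",
    batch_keys="image",
    nearest_interp=True,
).attach(evaluator)  # `evaluator` is an assumed Ignite Engine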

attach(engine)[source]
Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

Post processing

class monai.handlers.PostProcessing(transform)[source]

Ignite handler to execute additional post processing after the post processing in engines. So users can insert other handlers between engine postprocessing and this post processing handler.

Parameters

transform (Callable) – callable function to execute on the engine.state.batch and engine.state.output. can also be composed transforms.

attach(engine)[source]
Parameters

engine (Engine) – Ignite Engine, it can be a trainer, validator or evaluator.

Return type

None

Utilities

monai.handlers.utils.evenly_divisible_all_gather(data)[source]

Utility function for distributed data parallel: pads the tensor at the first dim to make it evenly divisible across ranks, then performs all_gather.

Parameters

data (Tensor) – source tensor to pad and execute all_gather in distributed data parallel.

Note

The input data on different ranks must have exactly same dtype.

Return type

Tensor

monai.handlers.utils.from_engine(keys, first=False)[source]

Utility function to simplify the batch_transform or output_transform args of ignite components when handling dictionary or list of dictionaries(for example: engine.state.batch or engine.state.output). Users only need to set the expected keys, then it will return a callable function to extract data from dictionary and construct a tuple respectively.

If data is a list of dictionaries after decollating, extract expected keys and construct lists respectively, for example, if data is [{“A”: 1, “B”: 2}, {“A”: 3, “B”: 4}], from_engine([“A”, “B”]): ([1, 3], [2, 4]).

It can help avoid a complicated lambda function and make the arg of metrics more straight-forward. For example, set the first key as the prediction and the second key as label to get the expected data from engine.state.output for a metric:

from monai.handlers import MeanDice, from_engine

metric = MeanDice(
    include_background=False,
    output_transform=from_engine(["pred", "label"])
)
Parameters
  • keys (Union[Collection[Hashable], Hashable]) – specified keys to extract data from dictionary or decollated list of dictionaries.

  • first (bool) – whether to only extract the specified keys from the first item if the input data is a list of dictionaries. It’s used to extract scalar data which doesn’t have a batch dim and was replicated into every dictionary when decollating, like loss, etc.

monai.handlers.utils.stopping_fn_from_loss()[source]

Returns a stopping function for ignite.handlers.EarlyStopping using the loss value.

monai.handlers.utils.stopping_fn_from_metric(metric_name)[source]

Returns a stopping function for ignite.handlers.EarlyStopping using the given metric name.

monai.handlers.utils.string_list_all_gather(strings)[source]

Utility function for distributed data parallel to all gather a list of strings. Note that if the item in strings is longer than 1024 chars, it will be truncated to 1024: https://pytorch.org/ignite/v0.4.5/distributed.html#ignite.distributed.utils.all_gather.

Parameters

strings (List[str]) – a list of strings to all gather.

Return type

List[str]

monai.handlers.utils.write_metrics_reports(save_dir, images, metrics, metric_details, summary_ops, deli='\\t', output_type='csv')[source]

Utility function to write the metrics into files. It contains 3 parts: 1. if the metrics dict is not None, write overall metrics into a file, every line is a metric name and value pair. 2. if the metric_details dict is not None, write the raw metric data of every image into a file, one line per image. 3. if summary_ops is not None, compute a summary based on operations on metric_details and write it to a file.

Parameters
  • save_dir (str) – directory to save all the metrics reports.

  • images (Optional[Sequence[str]]) – name or path of every input image corresponding to the metric_details data. if None, will use index number as the filename of every input image.

  • metrics (Optional[Dict[str, Union[Tensor, ndarray]]]) – a dictionary of (metric name, metric value) pairs.

  • metric_details (Optional[Dict[str, Union[Tensor, ndarray]]]) – a dictionary of (metric name, metric raw values) pairs, usually, it comes from metrics computation, for example, the raw value can be the mean_dice of every channel of every input image.

  • summary_ops (Union[str, Sequence[str], None]) –

    expected computation operations to generate the summary report. It can be None, “*”, or a list of strings; defaults to None. None: don’t generate a summary report for the metric_details. “*”: generate a summary report for every metric_details with all the supported operations. List of strings: generate a summary report for every metric_details with the specified operations, which should be within the list: [“mean”, “median”, “max”, “min”, “<int>percentile”, “std”, “notnans”]. The number in “<int>percentile” should be in [0, 100], like “15percentile”; default: “90percentile”. For more details, please check: https://numpy.org/doc/stable/reference/generated/numpy.nanpercentile.html. Note that for the overall summary, it computes the nanmean of all classes for each image first, then computes the summary. Example of the generated summary report:

    class    mean    median    max    5percentile 95percentile  notnans
    class0  6.0000   6.0000   7.0000   5.1000      6.9000       2.0000
    class1  6.0000   6.0000   6.0000   6.0000      6.0000       1.0000
    mean    6.2500   6.2500   7.0000   5.5750      6.9250       2.0000
    

  • deli (str) – the delimiter character in the file, defaults to "\t" (tab).

  • output_type (str) – expected output file type, supported types: [“csv”], default to “csv”.