Engines
Multi-GPU data parallel
-
monai.engines.multi_gpu_supervised_trainer.create_multigpu_supervised_evaluator(net, metrics=None, devices=None, non_blocking=False, prepare_batch=<function _prepare_batch>, output_transform=<function _default_eval_transform>, distributed=False)[source]
Derived from create_supervised_evaluator in Ignite.
Factory function for creating an evaluator for supervised models.
- Parameters
net (Module) – the model to evaluate.
metrics (Optional[Dict[str, Metric]]) – a map of metric names to Metrics.
devices (Optional[Sequence[device]]) – device(s) specification (default: None). Applies to both the model and batches. None means all available devices are used; an empty list means CPU only.
non_blocking (bool) – if True and the copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
prepare_batch (Callable) – function that receives batch, device, non_blocking and outputs a tuple of tensors (batch_x, batch_y).
output_transform (Callable) – function that receives 'x', 'y', 'y_pred' and returns the value to be assigned to engine.state.output after each iteration. The default returns (y_pred, y), which fits the output expected by the metrics. If you change it, you should also use output_transform in the metrics.
distributed (bool) – whether to convert the model to DistributedDataParallel; if there are multiple devices, the first device is used as the output device.
Note
engine.state.output for this engine is defined by the output_transform parameter and is a tuple of (batch_pred, batch_y) by default.
- Returns
an evaluator engine with supervised inference function.
- Return type
Engine
-
monai.engines.multi_gpu_supervised_trainer.create_multigpu_supervised_trainer(net, optimizer, loss_fn, devices=None, non_blocking=False, prepare_batch=<function _prepare_batch>, output_transform=<function _default_transform>, distributed=False)[source]
Derived from create_supervised_trainer in Ignite.
Factory function for creating a trainer for supervised models.
- Parameters
net (
Module
) β the network to train.optimizer (
Optimizer
) β the optimizer to use.loss_fn (
Callable
) β the loss function to use.devices (
Optional
[Sequence
[device
]]) β device(s) type specification (default: None). Applies to both model and batches. None is all devices used, empty list is CPU only.non_blocking (
bool
) β if True and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.prepare_batch (
Callable
) β function that receives batch, device, non_blocking and outputs tuple of tensors (batch_x, batch_y).output_transform (
Callable
) β function that receives βxβ, βyβ, βy_predβ, βlossβ and returns value to be assigned to engineβs state.output after each iteration. Default is returning loss.item().distributed (
bool
) β whether convert model to DistributedDataParallel, if have multiple devices, use the first device as output device.
- Returns
a trainer engine with supervised update function.
- Return type
Engine
Note
engine.state.output for this engine is defined by the output_transform parameter and is the loss of the processed batch by default.
Workflows
Workflow
-
class monai.engines.workflow.Workflow(device, max_epochs, data_loader, epoch_length=None, non_blocking=False, prepare_batch=<function default_prepare_batch>, iteration_update=None, post_transform=None, key_metric=None, additional_metrics=None, handlers=None, amp=False, event_names=None, event_to_attr=None)[source]
Workflow defines the core work process, inheriting from the Ignite Engine. All trainers, validators and evaluators share this same workflow as a base class, because they can all be treated as the same Ignite engine loop. It initializes all the shareable data in Ignite's engine.state and attaches additional processing logic to the Ignite engine based on the Event-Handler mechanism.
Users should consider inheriting from trainer or evaluator to develop more trainers or evaluators.
- Parameters
device (device) – an object representing the device on which to run.
max_epochs (int) – the total number of epochs for the engine to run; validators and evaluators have only 1 epoch.
data_loader (Union[Iterable, DataLoader]) – the data source the Ignite engine runs on; must be an Iterable or torch.utils.data.DataLoader.
epoch_length (Optional[int]) – number of iterations for one epoch, defaults to len(data_loader).
non_blocking (bool) – if True and the copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
prepare_batch (Callable) – function to parse image and label for every iteration.
iteration_update (Optional[Callable]) – the callable function for every iteration, expected to accept engine and batchdata as input parameters. If not provided, self._iteration() is used instead.
post_transform (Optional[Callable]) – execute additional transformations on the model output data, typically several Tensor-based transforms composed by Compose.
key_metric (Optional[Dict[str, Metric]]) – compute the metric when every iteration completes, and save the average value to engine.state.metrics when the epoch completes. key_metric is the main metric used to compare and save checkpoints to files.
additional_metrics (Optional[Dict[str, Metric]]) – more Ignite metrics to also attach to the Ignite Engine.
handlers (Optional[Sequence]) – every handler is a set of Ignite Event-Handlers and must have an attach function, e.g. CheckpointHandler, StatsHandler, SegmentationSaver.
amp (bool) – whether to enable auto-mixed-precision training or inference, default is False.
event_names (Optional[List[Union[str, EventEnum]]]) – additional custom Ignite events to register to the engine. New events can be a list of str or ignite.engine.events.EventEnum.
event_to_attr (Optional[dict]) – a dictionary mapping an event to a state attribute to add to engine.state. For more details, see: https://github.com/pytorch/ignite/blob/v0.4.4.post1/ignite/engine/engine.py#L160
- Raises
TypeError – When device is not a torch.device.
TypeError – When data_loader is not a torch.utils.data.DataLoader.
TypeError – When key_metric is not an Optional[dict].
TypeError – When additional_metrics is not an Optional[dict].
Trainer
-
class monai.engines.Trainer(device, max_epochs, data_loader, epoch_length=None, non_blocking=False, prepare_batch=<function default_prepare_batch>, iteration_update=None, post_transform=None, key_metric=None, additional_metrics=None, handlers=None, amp=False, event_names=None, event_to_attr=None)[source]
Base class for all kinds of trainers, inherits from Workflow.
SupervisedTrainer
-
class monai.engines.SupervisedTrainer(device, max_epochs, train_data_loader, network, optimizer, loss_function, epoch_length=None, non_blocking=False, prepare_batch=<function default_prepare_batch>, iteration_update=None, inferer=None, post_transform=None, key_train_metric=None, additional_metrics=None, train_handlers=None, amp=False, event_names=None, event_to_attr=None)[source]
Standard supervised training method with image and label, inherits from Trainer and Workflow.
- Parameters
device (device) – an object representing the device on which to run.
max_epochs (int) – the total number of epochs for the trainer to run.
train_data_loader (Union[Iterable, DataLoader]) – the data source the Ignite engine runs on; must be an Iterable or torch.utils.data.DataLoader.
network (Module) – the network to train.
optimizer (Optimizer) – the optimizer associated with the network.
loss_function (Callable) – the loss function associated with the optimizer.
epoch_length (Optional[int]) – number of iterations for one epoch, defaults to len(train_data_loader).
non_blocking (bool) – if True and the copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
prepare_batch (Callable) – function to parse image and label for the current iteration.
iteration_update (Optional[Callable]) – the callable function for every iteration, expected to accept engine and batchdata as input parameters. If not provided, self._iteration() is used instead.
inferer (Optional[Inferer]) – inference method that executes the model forward pass on the input data, e.g. SlidingWindow.
post_transform (Optional[Transform]) – execute additional transformations on the model output data, typically several Tensor-based transforms composed by Compose.
key_train_metric (Optional[Dict[str, Metric]]) – compute the metric when every iteration completes, and save the average value to engine.state.metrics when the epoch completes. key_train_metric is the main metric used to compare and save checkpoints to files.
additional_metrics (Optional[Dict[str, Metric]]) – more Ignite metrics to also attach to the Ignite Engine.
train_handlers (Optional[Sequence]) – every handler is a set of Ignite Event-Handlers and must have an attach function, e.g. CheckpointHandler, StatsHandler, SegmentationSaver.
amp (bool) – whether to enable auto-mixed-precision training, default is False.
event_names (Optional[List[Union[str, EventEnum]]]) – additional custom Ignite events to register to the engine. New events can be a list of str or ignite.engine.events.EventEnum.
event_to_attr (Optional[dict]) – a dictionary mapping an event to a state attribute to add to engine.state. For more details, see: https://github.com/pytorch/ignite/blob/v0.4.4.post1/ignite/engine/engine.py#L160
GanTrainer
-
class monai.engines.GanTrainer(device, max_epochs, train_data_loader, g_network, g_optimizer, g_loss_function, d_network, d_optimizer, d_loss_function, epoch_length=None, g_inferer=None, d_inferer=None, d_train_steps=1, latent_shape=64, non_blocking=False, d_prepare_batch=<function default_prepare_batch>, g_prepare_batch=<function default_make_latent>, g_update_latents=True, iteration_update=None, post_transform=None, key_train_metric=None, additional_metrics=None, train_handlers=None)[source]
Generative adversarial network training based on Goodfellow et al. 2014 (https://arxiv.org/abs/1406.2661), inherits from Trainer and Workflow.
- Training Loop: for each batch of data of size m
Generate m fakes from random latent codes.
Update discriminator with these fakes and current batch reals, repeated d_train_steps times.
If g_update_latents, generate m fakes from new random latent codes.
Update generator with these fakes using discriminator feedback.
- Parameters
device (device) – an object representing the device on which to run.
max_epochs (int) – the total number of epochs for the engine to run.
train_data_loader (DataLoader) – the core Ignite engine uses this DataLoader for the training-loop batchdata.
g_network (Module) – generator (G) network architecture.
g_optimizer (Optimizer) – G optimizer function.
g_loss_function (Callable) – G loss function for the optimizer.
d_network (Module) – discriminator (D) network architecture.
d_optimizer (Optimizer) – D optimizer function.
d_loss_function (Callable) – D loss function for the optimizer.
epoch_length (Optional[int]) – number of iterations for one epoch, defaults to len(train_data_loader).
g_inferer (Optional[Inferer]) – inference method to execute the G model forward pass. Defaults to SimpleInferer().
d_inferer (Optional[Inferer]) – inference method to execute the D model forward pass. Defaults to SimpleInferer().
d_train_steps (int) – number of times to update D with each real-data minibatch. Defaults to 1.
latent_shape (int) – size of the G input latent code. Defaults to 64.
non_blocking (bool) – if True and the copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
d_prepare_batch (Callable) – callback function to prepare batchdata for the D inferer. Defaults to returning GanKeys.REALS from the batchdata dict.
g_prepare_batch (Callable) – callback function to create a batch of latent input for the G inferer. Defaults to returning random latents.
g_update_latents (bool) – calculate G loss with new latent codes. Defaults to True.
iteration_update (Optional[Callable]) – the callable function for every iteration, expected to accept engine and batchdata as input parameters. If not provided, self._iteration() is used instead.
post_transform (Optional[Transform]) – execute additional transformations on the model output data, typically several Tensor-based transforms composed by Compose.
key_train_metric (Optional[Dict[str, Metric]]) – compute the metric when every iteration completes, and save the average value to engine.state.metrics when the epoch completes. key_train_metric is the main metric used to compare and save checkpoints to files.
additional_metrics (Optional[Dict[str, Metric]]) – more Ignite metrics to also attach to the Ignite Engine.
train_handlers (Optional[Sequence]) – every handler is a set of Ignite Event-Handlers and must have an attach function, e.g. CheckpointHandler, StatsHandler, SegmentationSaver.
Evaluator
-
class monai.engines.Evaluator(device, val_data_loader, epoch_length=None, non_blocking=False, prepare_batch=<function default_prepare_batch>, iteration_update=None, post_transform=None, key_val_metric=None, additional_metrics=None, val_handlers=None, amp=False, mode=<ForwardMode.EVAL: 'eval'>, event_names=None, event_to_attr=None)[source]
Base class for all kinds of evaluators, inherits from Workflow.
- Parameters
device (device) – an object representing the device on which to run.
val_data_loader (Union[Iterable, DataLoader]) – the data source the Ignite engine runs on; must be an Iterable or torch.utils.data.DataLoader.
epoch_length (Optional[int]) – number of iterations for one epoch, defaults to len(val_data_loader).
non_blocking (bool) – if True and the copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
prepare_batch (Callable) – function to parse image and label for the current iteration.
iteration_update (Optional[Callable]) – the callable function for every iteration, expected to accept engine and batchdata as input parameters. If not provided, self._iteration() is used instead.
post_transform (Optional[Transform]) – execute additional transformations on the model output data, typically several Tensor-based transforms composed by Compose.
key_val_metric (Optional[Dict[str, Metric]]) – compute the metric when every iteration completes, and save the average value to engine.state.metrics when the epoch completes. key_val_metric is the main metric used to compare and save checkpoints to files.
additional_metrics (Optional[Dict[str, Metric]]) – more Ignite metrics to also attach to the Ignite Engine.
val_handlers (Optional[Sequence]) – every handler is a set of Ignite Event-Handlers and must have an attach function, e.g. CheckpointHandler, StatsHandler, SegmentationSaver.
amp (bool) – whether to enable auto-mixed-precision evaluation, default is False.
mode (Union[ForwardMode, str]) – model forward mode during evaluation, should be 'eval' or 'train', which maps to model.eval() or model.train(); default is 'eval'.
event_names (Optional[List[Union[str, EventEnum]]]) – additional custom Ignite events to register to the engine. New events can be a list of str or ignite.engine.events.EventEnum.
event_to_attr (Optional[dict]) – a dictionary mapping an event to a state attribute to add to engine.state. For more details, see: https://github.com/pytorch/ignite/blob/v0.4.4.post1/ignite/engine/engine.py#L160
SupervisedEvaluator
-
class monai.engines.SupervisedEvaluator(device, val_data_loader, network, epoch_length=None, non_blocking=False, prepare_batch=<function default_prepare_batch>, iteration_update=None, inferer=None, post_transform=None, key_val_metric=None, additional_metrics=None, val_handlers=None, amp=False, mode=<ForwardMode.EVAL: 'eval'>, event_names=None, event_to_attr=None)[source]
Standard supervised evaluation method with image and label (optional), inherits from Evaluator and Workflow.
- Parameters
device (device) – an object representing the device on which to run.
val_data_loader (Union[Iterable, DataLoader]) – the data source the Ignite engine runs on; must be an Iterable, typically a torch.utils.data.DataLoader.
network (Module) – use this network to run the model forward.
epoch_length (Optional[int]) – number of iterations for one epoch, defaults to len(val_data_loader).
non_blocking (bool) – if True and the copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
prepare_batch (Callable) – function to parse image and label for the current iteration.
iteration_update (Optional[Callable]) – the callable function for every iteration, expected to accept engine and batchdata as input parameters. If not provided, self._iteration() is used instead.
inferer (Optional[Inferer]) – inference method that executes the model forward pass on the input data, e.g. SlidingWindow.
post_transform (Optional[Transform]) – execute additional transformations on the model output data, typically several Tensor-based transforms composed by Compose.
key_val_metric (Optional[Dict[str, Metric]]) – compute the metric when every iteration completes, and save the average value to engine.state.metrics when the epoch completes. key_val_metric is the main metric used to compare and save checkpoints to files.
additional_metrics (Optional[Dict[str, Metric]]) – more Ignite metrics to also attach to the Ignite Engine.
val_handlers (Optional[Sequence]) – every handler is a set of Ignite Event-Handlers and must have an attach function, e.g. CheckpointHandler, StatsHandler, SegmentationSaver.
amp (bool) – whether to enable auto-mixed-precision evaluation, default is False.
mode (Union[ForwardMode, str]) – model forward mode during evaluation, should be 'eval' or 'train', which maps to model.eval() or model.train(); default is 'eval'.
event_names (Optional[List[Union[str, EventEnum]]]) – additional custom Ignite events to register to the engine. New events can be a list of str or ignite.engine.events.EventEnum.
event_to_attr (Optional[dict]) – a dictionary mapping an event to a state attribute to add to engine.state. For more details, see: https://github.com/pytorch/ignite/blob/v0.4.4.post1/ignite/engine/engine.py#L160
EnsembleEvaluator
-
class monai.engines.EnsembleEvaluator(device, val_data_loader, networks, pred_keys, epoch_length=None, non_blocking=False, prepare_batch=<function default_prepare_batch>, iteration_update=None, inferer=None, post_transform=None, key_val_metric=None, additional_metrics=None, val_handlers=None, amp=False, mode=<ForwardMode.EVAL: 'eval'>, event_names=None, event_to_attr=None)[source]
Ensemble evaluation for multiple models, inherits from Evaluator and Workflow. It accepts a list of models for inference and outputs a list of predictions for further operations.
- Parameters
device (device) – an object representing the device on which to run.
val_data_loader (Union[Iterable, DataLoader]) – the data source the Ignite engine runs on; must be an Iterable, typically a torch.utils.data.DataLoader.
epoch_length (Optional[int]) – number of iterations for one epoch, defaults to len(val_data_loader).
networks (Sequence[Module]) – use these networks to run the model forward in order.
pred_keys (Sequence[str]) – the keys under which to store each prediction; the length must exactly match the number of networks.
non_blocking (bool) – if True and the copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
prepare_batch (Callable) – function to parse image and label for the current iteration.
iteration_update (Optional[Callable]) – the callable function for every iteration, expected to accept engine and batchdata as input parameters. If not provided, self._iteration() is used instead.
inferer (Optional[Inferer]) – inference method that executes the model forward pass on the input data, e.g. SlidingWindow.
post_transform (Optional[Transform]) – execute additional transformations on the model output data, typically several Tensor-based transforms composed by Compose.
key_val_metric (Optional[Dict[str, Metric]]) – compute the metric when every iteration completes, and save the average value to engine.state.metrics when the epoch completes. key_val_metric is the main metric used to compare and save checkpoints to files.
additional_metrics (Optional[Dict[str, Metric]]) – more Ignite metrics to also attach to the Ignite Engine.
val_handlers (Optional[Sequence]) – every handler is a set of Ignite Event-Handlers and must have an attach function, e.g. CheckpointHandler, StatsHandler, SegmentationSaver.
amp (bool) – whether to enable auto-mixed-precision evaluation, default is False.
mode (Union[ForwardMode, str]) – model forward mode during evaluation, should be 'eval' or 'train', which maps to model.eval() or model.train(); default is 'eval'.
event_names (Optional[List[Union[str, EventEnum]]]) – additional custom Ignite events to register to the engine. New events can be a list of str or ignite.engine.events.EventEnum.
event_to_attr (Optional[dict]) – a dictionary mapping an event to a state attribute to add to engine.state. For more details, see: https://github.com/pytorch/ignite/blob/v0.4.4.post1/ignite/engine/engine.py#L160