Deploying a MedNIST Classifier App with MONAI Deploy App SDK

This tutorial demonstrates how to package a trained model with the MONAI Deploy App SDK into an artifact that can be run as a local program performing inference, as a workflow job doing the same, and as a Docker containerized workflow execution.

In this tutorial, we will train a MedNIST classifier, following the corresponding MONAI tutorial, and then implement, package, and locally execute the inference application.

Train a MedNIST classifier model with MONAI Core

Setup environment

# Install necessary packages for MONAI Core
!python -c "import monai" || pip install -q "monai[pillow, tqdm]"
!python -c "import ignite" || pip install -q "monai[ignite]"
!python -c "import gdown" || pip install -q "monai[gdown]"
!python -c "import pydicom" || pip install -q "pydicom>=1.4.2"
!python -c "import highdicom" || pip install -q "highdicom>=0.18.2"  # for the use of DICOM Writer operators

# Install MONAI Deploy App SDK package
!python -c "import monai.deploy" || pip install -q "monai-deploy-app-sdk"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'ignite'

Setup imports

# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import shutil
import tempfile
import glob
import PIL.Image
import torch
import numpy as np

from ignite.engine import Events

from monai.apps import download_and_extract
from monai.config import print_config
from monai.networks.nets import DenseNet121
from monai.engines import SupervisedTrainer
from monai.transforms import (
    EnsureChannelFirst,
    Compose,
    LoadImage,
    RandFlip,
    RandRotate,
    RandZoom,
    ScaleIntensity,
    EnsureType,
)
from monai.utils import set_determinism

set_determinism(seed=0)

print_config()
MONAI version: 1.3.0
Numpy version: 1.24.4
Pytorch version: 2.1.1+cu121
MONAI flags: HAS_EXT = False, USE_COMPILED = False, USE_META_DICT = False
MONAI rev id: 865972f7a791bf7b42efbcd87c8402bd865b329e
MONAI __file__: /home/<username>/src/monai-deploy-app-sdk/.venv/lib/python3.8/site-packages/monai/__init__.py

Optional dependencies:
Pytorch Ignite version: 0.4.11
ITK version: NOT INSTALLED or UNKNOWN VERSION.
Nibabel version: 5.1.0
scikit-image version: 0.21.0
scipy version: 1.10.1
Pillow version: 10.0.1
Tensorboard version: NOT INSTALLED or UNKNOWN VERSION.
gdown version: 4.7.1
TorchVision version: NOT INSTALLED or UNKNOWN VERSION.
tqdm version: 4.66.1
lmdb version: NOT INSTALLED or UNKNOWN VERSION.
psutil version: 5.9.6
pandas version: NOT INSTALLED or UNKNOWN VERSION.
einops version: NOT INSTALLED or UNKNOWN VERSION.
transformers version: NOT INSTALLED or UNKNOWN VERSION.
mlflow version: NOT INSTALLED or UNKNOWN VERSION.
pynrrd version: NOT INSTALLED or UNKNOWN VERSION.
clearml version: NOT INSTALLED or UNKNOWN VERSION.

For details about installing the optional dependencies, please visit:
    https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies

Download dataset

The MedNIST dataset was gathered from several sets from TCIA, the RSNA Bone Age Challenge (https://www.rsna.org/education/ai-resources-and-training/ai-image-challenge/rsna-pediatric-bone-age-challenge-2017), and the NIH Chest X-ray dataset.

The dataset is kindly made available by Dr. Bradley J. Erickson M.D., Ph.D. (Department of Radiology, Mayo Clinic) under the Creative Commons CC BY-SA 4.0 license.

If you use the MedNIST dataset, please acknowledge the source.

directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)

resource = "https://drive.google.com/uc?id=1QsnnkvZyJPcbRoV_ArW8SnE1OTuoVbKE"
md5 = "0bc7306e7427e00ad1c5526a6677552d"

compressed_file = os.path.join(root_dir, "MedNIST.tar.gz")
data_dir = os.path.join(root_dir, "MedNIST")
if not os.path.exists(data_dir):
    download_and_extract(resource, compressed_file, root_dir, md5)
/tmp/tmpjh72rafb
Downloading...
From (uriginal): https://drive.google.com/uc?id=1QsnnkvZyJPcbRoV_ArW8SnE1OTuoVbKE
From (redirected): https://drive.google.com/uc?id=1QsnnkvZyJPcbRoV_ArW8SnE1OTuoVbKE&confirm=t&uuid=8946f974-8b80-4bd3-8696-ac8716b357ed
To: /tmp/tmpv4hps2d5/MedNIST.tar.gz
100%|██████████| 61.8M/61.8M [00:02<00:00, 26.6MB/s]
2023-11-15 19:10:32,776 - INFO - Downloaded: /tmp/tmpjh72rafb/MedNIST.tar.gz
2023-11-15 19:10:32,883 - INFO - Verified 'MedNIST.tar.gz', md5: 0bc7306e7427e00ad1c5526a6677552d.
2023-11-15 19:10:32,884 - INFO - Writing into directory: /tmp/tmpjh72rafb.

subdirs = sorted(glob.glob(f"{data_dir}/*/"))

class_names = [os.path.basename(sd[:-1]) for sd in subdirs]
image_files = [glob.glob(f"{sb}/*") for sb in subdirs]

image_files_list = sum(image_files, [])
image_class = sum(([i] * len(f) for i, f in enumerate(image_files)), [])
image_width, image_height = PIL.Image.open(image_files_list[0]).size

print(f"Label names: {class_names}")
print(f"Label counts: {list(map(len, image_files))}")
print(f"Total image count: {len(image_class)}")
print(f"Image dimensions: {image_width} x {image_height}")
Label names: ['AbdomenCT', 'BreastMRI', 'CXR', 'ChestCT', 'Hand', 'HeadCT']
Label counts: [10000, 8954, 10000, 10000, 10000, 10000]
Total image count: 58954
Image dimensions: 64 x 64
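The `sum(..., [])` idiom above flattens the per-class file lists and builds a parallel list of integer labels. A minimal sketch with toy, hypothetical file names (independent of the MedNIST data):

```python
# Toy per-class file lists (hypothetical names), mirroring the structure of image_files above.
toy_image_files = [
    ["abd_0.jpeg", "abd_1.jpeg"],                # class 0
    ["mri_0.jpeg"],                              # class 1
    ["cxr_0.jpeg", "cxr_1.jpeg", "cxr_2.jpeg"],  # class 2
]

# Flatten the nested lists into one flat list of paths.
toy_files_list = sum(toy_image_files, [])

# Build a parallel list of integer labels, one label per file.
toy_labels = sum(([i] * len(f) for i, f in enumerate(toy_image_files)), [])

print(toy_files_list)  # ['abd_0.jpeg', 'abd_1.jpeg', 'mri_0.jpeg', 'cxr_0.jpeg', 'cxr_1.jpeg', 'cxr_2.jpeg']
print(toy_labels)      # [0, 0, 1, 2, 2, 2]
```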

Setup and train

Here we’ll create a transform sequence and train the network. Validation and testing are omitted, since the model is known to train well and they are not needed for this tutorial:

train_transforms = Compose(
    [
        LoadImage(image_only=True),
        EnsureChannelFirst(channel_dim="no_channel"),
        ScaleIntensity(),
        RandRotate(range_x=np.pi / 12, prob=0.5, keep_size=True),
        RandFlip(spatial_axis=0, prob=0.5),
        RandZoom(min_zoom=0.9, max_zoom=1.1, prob=0.5),
        EnsureType(),
    ]
)
class MedNISTDataset(torch.utils.data.Dataset):
    def __init__(self, image_files, labels, transforms):
        self.image_files = image_files
        self.labels = labels
        self.transforms = transforms

    def __len__(self):
        return len(self.image_files)

    def __getitem__(self, index):
        return self.transforms(self.image_files[index]), self.labels[index]


# just one dataset and loader, we won't bother with validation or testing 
train_ds = MedNISTDataset(image_files_list, image_class, train_transforms)
train_loader = torch.utils.data.DataLoader(train_ds, batch_size=300, shuffle=True, num_workers=10)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = DenseNet121(spatial_dims=2, in_channels=1, out_channels=len(class_names)).to(device)
loss_function = torch.nn.CrossEntropyLoss()
opt = torch.optim.Adam(net.parameters(), 1e-5)
max_epochs = 5
def _prepare_batch(batch, device, non_blocking):
    return tuple(b.to(device) for b in batch)


trainer = SupervisedTrainer(device, max_epochs, train_loader, net, opt, loss_function, prepare_batch=_prepare_batch)


@trainer.on(Events.EPOCH_COMPLETED)
def _print_loss(engine):
    print(f"Epoch {engine.state.epoch}/{engine.state.max_epochs} Loss: {engine.state.output[0]['loss']}")


trainer.run()
Epoch 1/5 Loss: 0.1891738623380661
Epoch 2/5 Loss: 0.06714393198490143
Epoch 3/5 Loss: 0.028867393732070923
Epoch 4/5 Loss: 0.0186357069760561
Epoch 5/5 Loss: 0.0193067267537117

The network will be saved out here as a TorchScript object named classifier.zip:

torch.jit.script(net).save("classifier.zip")
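As a quick sanity check (not part of the original notebook), a saved TorchScript archive can be loaded back without the model's Python class being available. A minimal sketch using a tiny stand-in module; classifier.zip from the cell above loads the same way with torch.jit.load:

```python
import os
import tempfile

import torch


class TinyNet(torch.nn.Module):
    """Tiny stand-in network; the tutorial scripts the trained DenseNet121 the same way."""

    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)


net = TinyNet().eval()
path = os.path.join(tempfile.mkdtemp(), "tiny_classifier.zip")
torch.jit.script(net).save(path)  # same pattern as classifier.zip above

# No Python class definition is needed at load time; the archive is self-contained.
reloaded = torch.jit.load(path, map_location="cpu")

x = torch.zeros(1, 4)
with torch.no_grad():
    assert torch.allclose(net(x), reloaded(x))  # outputs match after the round trip
```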

Implementing and Packaging Application with MONAI Deploy App SDK

Based on the TorchScript model (classifier.zip), we will implement an application that processes an input JPEG image and writes the prediction (classification) result to a JSON file (output.json).

Creating Operators and connecting them in Application class

We used the following transforms as pre-transforms during training.

Train transforms used in training
train_transforms = Compose(
    [
        LoadImage(image_only=True),
        EnsureChannelFirst(channel_dim="no_channel"),
        ScaleIntensity(),
        RandRotate(range_x=np.pi / 12, prob=0.5, keep_size=True),
        RandFlip(spatial_axis=0, prob=0.5),
        RandZoom(min_zoom=0.9, max_zoom=1.1, prob=0.5),
        EnsureType(),
    ]
)

The RandRotate, RandFlip, and RandZoom transforms are used only for training and are not necessary during inference.
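Conceptually, the remaining deterministic pre-transforms add a channel axis and rescale intensities to [0, 1]. A numpy-only sketch of the equivalent operations (illustrative only, not the MONAI implementation):

```python
import numpy as np


def ensure_channel_first(img):
    # Add a leading channel axis to a (H, W) image -> (1, H, W),
    # analogous to EnsureChannelFirst(channel_dim="no_channel").
    return img[np.newaxis, ...]


def scale_intensity(img):
    # Linearly rescale intensities to [0, 1], like ScaleIntensity() with default settings.
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min())


img = np.array([[0, 128], [64, 255]], dtype=np.uint8)  # toy 2x2 "image"
out = scale_intensity(ensure_channel_first(img))

print(out.shape)             # (1, 2, 2)
print(out.min(), out.max())  # 0.0 1.0
```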

In our inference application, we will define two operators:

  1. LoadPILOperator - Load a JPEG image from the input path and pass the loaded image object to the next operator.

    • This operator does a similar job to the LoadImage(image_only=True) transform in train_transforms, but handles only one image.

    • Input: a file path (Path)

    • Output: an image object in memory (Image)

  2. MedNISTClassifierOperator - Pre-transform the given image using MONAI’s Compose class, feed it to the TorchScript model (classifier.zip), and write the prediction to a JSON file (output.json)

    • Pre-transforms consist of three transforms – EnsureChannelFirst, ScaleIntensity, and EnsureType.

    • Input: an image object in memory (Image)

    • Output: a folder path to which the prediction result (output.json) is written (DataPath)

The workflow of the application looks like this.

%%{init: {"theme": "base", "themeVariables": { "fontSize": "16px"}} }%%
classDiagram
    direction LR
    LoadPILOperator --|> MedNISTClassifierOperator : image...image
    class LoadPILOperator {
        <in>image : DISK
        image(out) IN_MEMORY
    }
    class MedNISTClassifierOperator {
        <in>image : IN_MEMORY
        output(out) DISK
    }

Set up environment variables

Before proceeding to building and packaging the application, we first need to set the well-known environment variables, because the application parses them for the input, output, and model folders. Defaults are used if these environment variables are absent.
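The fallback behavior can be sketched as follows; a minimal illustration using os.environ.get, where the variable names match the well-known ones and the default folder names are illustrative:

```python
import os
from pathlib import Path

# Resolve the app folders from the well-known environment variables,
# falling back to defaults when a variable is absent.
input_path = Path(os.environ.get("HOLOSCAN_INPUT_PATH", "input"))
output_path = Path(os.environ.get("HOLOSCAN_OUTPUT_PATH", "output"))
model_path = Path(os.environ.get("HOLOSCAN_MODEL_PATH", "models"))

print(input_path, output_path, model_path)
```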

Set the environment variables corresponding to the extracted data path.

input_folder = "input"
output_folder = "output"
models_folder = "models"

# Choose a file as test input
test_input_path = image_files[0][0]
!rm -rf {input_folder} && mkdir -p {input_folder} && cp {test_input_path} {input_folder} && ls {input_folder}
# Need to copy the model file to its own clean subfolder for packaging, to work around an issue in the Packager
!rm -rf {models_folder} && mkdir -p {models_folder}/model && cp classifier.zip {models_folder}/model && ls {models_folder}/model

%env HOLOSCAN_INPUT_PATH {input_folder}
%env HOLOSCAN_OUTPUT_PATH {output_folder}
%env HOLOSCAN_MODEL_PATH {models_folder}
001420.jpeg
classifier.zip
env: HOLOSCAN_INPUT_PATH=input
env: HOLOSCAN_OUTPUT_PATH=output
env: HOLOSCAN_MODEL_PATH=models

Setup imports

Let’s import necessary classes/decorators and define MEDNIST_CLASSES.

import logging
import os
from pathlib import Path
from typing import Optional

import torch

from monai.deploy.conditions import CountCondition
from monai.deploy.core import AppContext, Application, ConditionType, Fragment, Image, Operator, OperatorSpec
from monai.deploy.operators.dicom_text_sr_writer_operator import DICOMTextSRWriterOperator, EquipmentInfo, ModelInfo
from monai.transforms import EnsureChannelFirst, Compose, EnsureType, ScaleIntensity

MEDNIST_CLASSES = ["AbdomenCT", "BreastMRI", "CXR", "ChestCT", "Hand", "HeadCT"]

Creating Operator classes

LoadPILOperator
class LoadPILOperator(Operator):
    """Load image from the given input (DataPath) and set numpy array to the output (Image)."""

    DEFAULT_INPUT_FOLDER = Path.cwd() / "input"
    DEFAULT_OUTPUT_NAME = "image"

    # For now, need to have the input folder as an instance attribute, set on init.
    # If dynamically changing the input folder per compute, use an (optional) input port to convey the
    # value of the input folder, which is then emitted by an upstream operator.
    def __init__(
        self,
        fragment: Fragment,
        *args,
        input_folder: Path = DEFAULT_INPUT_FOLDER,
        output_name: str = DEFAULT_OUTPUT_NAME,
        **kwargs,
    ):
        """Creates a loader object, with the input folder and the output port name overridden as needed.

        Args:
            fragment (Fragment): An instance of the Application class which is derived from Fragment.
            input_folder (Path): Folder from which to load input file(s).
                                 Defaults to `input` in the current working directory.
            output_name (str): Name of the output port, which is an image object. Defaults to `image`.
        """

        self._logger = logging.getLogger("{}.{}".format(__name__, type(self).__name__))
        self.input_path = input_folder
        self.index = 0
        self.output_name_image = (
            output_name.strip() if output_name and len(output_name.strip()) > 0 else LoadPILOperator.DEFAULT_OUTPUT_NAME
        )

        super().__init__(fragment, *args, **kwargs)

    def setup(self, spec: OperatorSpec):
        """Set up the named input and output port(s)"""
        spec.output(self.output_name_image)

    def compute(self, op_input, op_output, context):
        import numpy as np
        from PIL import Image as PILImage

        # Input path is stored in the object attribute, but could change to use a named port if need be.
        input_path = self.input_path
        if input_path.is_dir():
            input_path = next(self.input_path.glob("*.*"))  # take the first file

        image = PILImage.open(input_path)
        image = image.convert("L")  # convert to greyscale image
        image_arr = np.asarray(image)

        output_image = Image(image_arr)  # create Image domain object with a numpy array
        op_output.emit(output_image, self.output_name_image)  # cannot omit the name even if single output.
MedNISTClassifierOperator
class MedNISTClassifierOperator(Operator):
    """Classifies the given image and returns the class name.

    Named inputs:
        image: Image object for which to generate the classification.
        output_folder: Optional, the path to save the results JSON file, overriding the one set on __init__

    Named output:
        result_text: The classification results in text.
    """

    DEFAULT_OUTPUT_FOLDER = Path.cwd() / "classification_results"
    # For testing the app directly, the model should be at the following path.
    MODEL_LOCAL_PATH = Path(os.environ.get("HOLOSCAN_MODEL_PATH", Path.cwd() / "model/model.ts"))

    def __init__(
        self,
        fragment: Fragment,
        *args,
        app_context: AppContext,
        model_name: Optional[str] = "",
        model_path: Path = MODEL_LOCAL_PATH,
        output_folder: Path = DEFAULT_OUTPUT_FOLDER,
        **kwargs,
    ):
        """Creates an instance with the reference back to the containing application/fragment.

        fragment (Fragment): An instance of the Application class which is derived from Fragment.
        model_name (str, optional): Name of the model. Defaults to "" for a single-model app.
        model_path (Path): Path to the model file. Defaults to model/model.ts in the current working directory.
        output_folder (Path, optional): Output folder for saving the classification results JSON file.
        """

        # The names used for the model inference input and output
        self._input_dataset_key = "image"
        self._pred_dataset_key = "pred"

        # The names used for the operator input and output
        self.input_name_image = "image"
        self.output_name_result = "result_text"

        # The name of the optional input port for passing data to override the output folder path.
        self.input_name_output_folder = "output_folder"

        # The output folder set on the object can be overridden at each compute by data in the optional named input
        self.output_folder = output_folder

        # Need the name when there are multiple models loaded
        self._model_name = model_name.strip() if isinstance(model_name, str) else ""
        # Need the path to load the models when they are not loaded in the execution context
        self.model_path = model_path
        self.app_context = app_context
        self.model = self._get_model(self.app_context, self.model_path, self._model_name)

        # This needs to be at the end of the constructor.
        super().__init__(fragment, *args, **kwargs)

    def _get_model(self, app_context: AppContext, model_path: Path, model_name: str):
        """Load the model with the given name from context or model path

        Args:
            app_context (AppContext): The application context object holding the model(s)
            model_path (Path): The path to the model file, as a backup to load model directly
            model_name (str): The name of the model, when multiples are loaded in the context
        """

        if app_context.models:
            # `app_context.models.get(model_name)` returns a model instance if exists.
            # If model_name is not specified and only one model exists, it returns that model.
            model = app_context.models.get(model_name)
        else:
            model = torch.jit.load(
                MedNISTClassifierOperator.MODEL_LOCAL_PATH,
                map_location=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
            )

        return model

    def setup(self, spec: OperatorSpec):
        """Set up the operator named input and named output, both are in-memory objects."""

        spec.input(self.input_name_image)
        spec.input(self.input_name_output_folder).condition(ConditionType.NONE)  # Optional for overriding.
        spec.output(self.output_name_result).condition(ConditionType.NONE)  # Not forcing a downstream receiver.

    @property
    def transform(self):
        return Compose([EnsureChannelFirst(channel_dim="no_channel"), ScaleIntensity(), EnsureType()])

    def compute(self, op_input, op_output, context):
        import json

        import torch

        img = op_input.receive(self.input_name_image).asnumpy()  # (64, 64), uint8. Input validation can be added.
        image_tensor = self.transform(img)  # (1, 64, 64), torch.float64
        image_tensor = image_tensor[None].float()  # (1, 1, 64, 64), torch.float32

        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        image_tensor = image_tensor.to(device)

        with torch.no_grad():
            outputs = self.model(image_tensor)

        _, output_classes = outputs.max(dim=1)

        result = MEDNIST_CLASSES[output_classes[0]]  # get the class name
        print(result)
        op_output.emit(result, self.output_name_result)

        # Get output folder, with value in optional input port overriding the obj attribute
        output_folder_on_compute = op_input.receive(self.input_name_output_folder) or self.output_folder
        Path.mkdir(output_folder_on_compute, parents=True, exist_ok=True)  # Let exception bubble up if raised.
        output_path = output_folder_on_compute / "output.json"
        with open(output_path, "w") as fp:
            json.dump(result, fp)
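The compute method ends by serializing the class name string with json.dump, so output.json contains a quoted JSON string. A minimal, stdlib-only sketch of the write/read round trip (using a hypothetical result value and a temporary folder):

```python
import json
import tempfile
from pathlib import Path

result = "AbdomenCT"  # example class name, as produced by the classifier

output_folder = Path(tempfile.mkdtemp())
output_path = output_folder / "output.json"
with open(output_path, "w") as fp:
    json.dump(result, fp)  # a bare string serializes as a quoted JSON string

# The file contents are the JSON-encoded string, i.e. "AbdomenCT" including quotes.
text = output_path.read_text()
assert text == '"AbdomenCT"'
assert json.loads(text) == result
```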

Creating Application class

Our application class looks like the following.

It defines the App class, which inherits from the Application class.

LoadPILOperator is connected to MedNISTClassifierOperator by using self.add_flow() in the compose() method of App.

class App(Application):
    """Application class for the MedNIST classifier."""

    def compose(self):
        app_context = Application.init_app_context({})  # Do not pass argv in Jupyter Notebook
        app_input_path = Path(app_context.input_path)
        app_output_path = Path(app_context.output_path)
        model_path = Path(app_context.model_path)
        load_pil_op = LoadPILOperator(self, CountCondition(self, 1), input_folder=app_input_path, name="pil_loader_op")
        classifier_op = MedNISTClassifierOperator(
            self, app_context=app_context, output_folder=app_output_path, model_path=model_path, name="classifier_op"
        )

        my_model_info = ModelInfo("MONAI WG Trainer", "MEDNIST Classifier", "0.1", "xyz")
        my_equipment = EquipmentInfo(manufacturer="MONAI Deploy App SDK", manufacturer_model="DICOM SR Writer")
        my_special_tags = {"SeriesDescription": "Not for clinical use. The result is for research use only."}
        dicom_sr_operator = DICOMTextSRWriterOperator(
            self,
            copy_tags=False,
            model_info=my_model_info,
            equipment_info=my_equipment,
            custom_tags=my_special_tags,
            output_folder=app_output_path,
        )

        self.add_flow(load_pil_op, classifier_op, {("image", "image")})
        self.add_flow(classifier_op, dicom_sr_operator, {("result_text", "text")})

Executing app locally

The test input file, output path, and model have been prepared, and the paths set in the environment variables, so we can go ahead and execute the application in the Jupyter notebook with a clean output folder.

!rm -rf $HOLOSCAN_OUTPUT_PATH
app = App().run()
[2023-11-15 19:18:17,922] [INFO] (root) - Parsed args: Namespace(argv=[], input=None, log_level=None, model=None, output=None, workdir=None)
[2023-11-15 19:18:17,941] [INFO] (root) - AppContext object: AppContext(input_path=input, output_path=output, model_path=models, workdir=)
[info] [gxf_executor.cpp:210] Creating context
[info] [gxf_executor.cpp:1595] Loading extensions from configs...
[info] [gxf_executor.cpp:1741] Activating Graph...
[info] [gxf_executor.cpp:1771] Running Graph...
[info] [gxf_executor.cpp:1773] Waiting for completion...
[info] [gxf_executor.cpp:1774] Graph execution waiting. Fragment: 
[info] [greedy_scheduler.cpp:190] Scheduling 3 entities
/home/mqin/src/monai-deploy-app-sdk/.venv/lib/python3.8/site-packages/monai/data/meta_tensor.py:116: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.)
  return torch.as_tensor(x, *args, **_kwargs).as_subclass(cls)
/home/mqin/src/monai-deploy-app-sdk/.venv/lib/python3.8/site-packages/pydicom/valuerep.py:443: UserWarning: Invalid value for VR UI: 'xyz'. Please see <https://dicom.nema.org/medical/dicom/current/output/html/part05.html#table_6.2-1> for allowed values for each VR.
  warnings.warn(msg)
[2023-11-15 19:18:19,246] [INFO] (root) - Finished writing DICOM instance to file output/1.2.826.0.1.3680043.8.498.89399783846974532553567524226806601923.dcm
[2023-11-15 19:18:19,249] [INFO] (monai.deploy.operators.dicom_text_sr_writer_operator.DICOMTextSRWriterOperator) - DICOM SOP instance saved in output/1.2.826.0.1.3680043.8.498.89399783846974532553567524226806601923.dcm
AbdomenCT
[info] [greedy_scheduler.cpp:369] Scheduler stopped: Some entities are waiting for execution, but there are no periodic or async entities to get out of the deadlock.
[info] [greedy_scheduler.cpp:398] Scheduler finished.
[info] [gxf_executor.cpp:1783] Graph execution deactivating. Fragment: 
[info] [gxf_executor.cpp:1784] Deactivating Graph...
[info] [gxf_executor.cpp:1787] Graph execution finished. Fragment: 
[info] [gxf_executor.cpp:229] Destroying context
!cat $HOLOSCAN_OUTPUT_PATH/output.json
"AbdomenCT"

Once the application is verified inside the Jupyter notebook, we can write the whole application to a file (mednist_classifier_monaideploy.py) by concatenating the code above, then add the following lines:

if __name__ == "__main__":
    App().run()

The above lines are needed to execute the application code with the Python interpreter.

# Create an application folder
!mkdir -p mednist_app
!rm -rf mednist_app/*
%%writefile mednist_app/mednist_classifier_monaideploy.py

# Copyright 2021-2023 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import os
from pathlib import Path
from typing import Optional

import torch

from monai.deploy.conditions import CountCondition
from monai.deploy.core import AppContext, Application, ConditionType, Fragment, Image, Operator, OperatorSpec
from monai.deploy.operators.dicom_text_sr_writer_operator import DICOMTextSRWriterOperator, EquipmentInfo, ModelInfo
from monai.transforms import EnsureChannelFirst, Compose, EnsureType, ScaleIntensity

MEDNIST_CLASSES = ["AbdomenCT", "BreastMRI", "CXR", "ChestCT", "Hand", "HeadCT"]


# @md.env(pip_packages=["pillow"])
class LoadPILOperator(Operator):
    """Load image from the given input (DataPath) and set numpy array to the output (Image)."""

    DEFAULT_INPUT_FOLDER = Path.cwd() / "input"
    DEFAULT_OUTPUT_NAME = "image"

    # For now, need to have the input folder as an instance attribute, set on init.
    # If dynamically changing the input folder per compute, use an (optional) input port to convey the
    # value of the input folder, which is then emitted by an upstream operator.
    def __init__(
        self,
        fragment: Fragment,
        *args,
        input_folder: Path = DEFAULT_INPUT_FOLDER,
        output_name: str = DEFAULT_OUTPUT_NAME,
        **kwargs,
    ):
        """Creates a loader object, with the input folder and the output port name overridden as needed.

        Args:
            fragment (Fragment): An instance of the Application class which is derived from Fragment.
            input_folder (Path): Folder from which to load input file(s).
                                 Defaults to `input` in the current working directory.
            output_name (str): Name of the output port, which is an image object. Defaults to `image`.
        """

        self._logger = logging.getLogger("{}.{}".format(__name__, type(self).__name__))
        self.input_path = input_folder
        self.index = 0
        self.output_name_image = (
            output_name.strip() if output_name and len(output_name.strip()) > 0 else LoadPILOperator.DEFAULT_OUTPUT_NAME
        )

        super().__init__(fragment, *args, **kwargs)

    def setup(self, spec: OperatorSpec):
        """Set up the named input and output port(s)"""
        spec.output(self.output_name_image)

    def compute(self, op_input, op_output, context):
        import numpy as np
        from PIL import Image as PILImage

        # Input path is stored in the object attribute, but could change to use a named port if need be.
        input_path = self.input_path
        if input_path.is_dir():
            input_path = next(self.input_path.glob("*.*"))  # take the first file

        image = PILImage.open(input_path)
        image = image.convert("L")  # convert to greyscale image
        image_arr = np.asarray(image)

        output_image = Image(image_arr)  # create Image domain object with a numpy array
        op_output.emit(output_image, self.output_name_image)  # cannot omit the name even if single output.


# @md.env(pip_packages=["monai"])
class MedNISTClassifierOperator(Operator):
    """Classifies the given image and returns the class name.

    Named inputs:
        image: Image object for which to generate the classification.
        output_folder: Optional, the path to save the results JSON file, overriding the one set on __init__

    Named output:
        result_text: The classification results in text.
    """

    DEFAULT_OUTPUT_FOLDER = Path.cwd() / "classification_results"
    # For testing the app directly, the model should be at the following path.
    MODEL_LOCAL_PATH = Path(os.environ.get("HOLOSCAN_MODEL_PATH", Path.cwd() / "model/model.ts"))

    def __init__(
        self,
        fragment: Fragment,
        *args,
        app_context: AppContext,
        model_name: Optional[str] = "",
        model_path: Path = MODEL_LOCAL_PATH,
        output_folder: Path = DEFAULT_OUTPUT_FOLDER,
        **kwargs,
    ):
        """Creates an instance with the reference back to the containing application/fragment.

        fragment (Fragment): An instance of the Application class which is derived from Fragment.
        model_name (str, optional): Name of the model. Defaults to "" for a single-model app.
        model_path (Path): Path to the model file. Defaults to model/model.ts in the current working directory.
        output_folder (Path, optional): Output folder for saving the classification results JSON file.
        """

        # The names used for the model inference input and output
        self._input_dataset_key = "image"
        self._pred_dataset_key = "pred"

        # The names used for the operator input and output
        self.input_name_image = "image"
        self.output_name_result = "result_text"

        # The name of the optional input port for passing data to override the output folder path.
        self.input_name_output_folder = "output_folder"

        # The output folder set on the object can be overridden at each compute by data in the optional named input
        self.output_folder = output_folder

        # Need the name when there are multiple models loaded
        self._model_name = model_name.strip() if isinstance(model_name, str) else ""
        # Need the path to load the models when they are not loaded in the execution context
        self.model_path = model_path
        self.app_context = app_context
        self.model = self._get_model(self.app_context, self.model_path, self._model_name)

        # This needs to be at the end of the constructor.
        super().__init__(fragment, *args, **kwargs)

    def _get_model(self, app_context: AppContext, model_path: Path, model_name: str):
        """Load the model with the given name from context or model path

        Args:
            app_context (AppContext): The application context object holding the model(s)
            model_path (Path): The path to the model file, as a backup to load model directly
            model_name (str): The name of the model, when multiples are loaded in the context
        """

        if app_context.models:
            # `app_context.models.get(model_name)` returns a model instance if exists.
            # If model_name is not specified and only one model exists, it returns that model.
            model = app_context.models.get(model_name)
        else:
            model = torch.jit.load(
                MedNISTClassifierOperator.MODEL_LOCAL_PATH,
                map_location=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
            )

        return model

    def setup(self, spec: OperatorSpec):
        """Set up the operator named input and named output, both are in-memory objects."""

        spec.input(self.input_name_image)
        spec.input(self.input_name_output_folder).condition(ConditionType.NONE)  # Optional for overriding.
        spec.output(self.output_name_result).condition(ConditionType.NONE)  # Not forcing a downstream receiver.

    @property
    def transform(self):
        return Compose([EnsureChannelFirst(channel_dim="no_channel"), ScaleIntensity(), EnsureType()])

    def compute(self, op_input, op_output, context):
        import json

        import torch

        img = op_input.receive(self.input_name_image).asnumpy()  # (64, 64), uint8. Input validation can be added.
        image_tensor = self.transform(img)  # (1, 64, 64), torch.float64
        image_tensor = image_tensor[None].float()  # (1, 1, 64, 64), torch.float32

        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        image_tensor = image_tensor.to(device)

        with torch.no_grad():
            outputs = self.model(image_tensor)

        _, output_classes = outputs.max(dim=1)

        result = MEDNIST_CLASSES[output_classes[0]]  # get the class name
        print(result)
        op_output.emit(result, self.output_name_result)

        # Get output folder, with value in optional input port overriding the obj attribute
        output_folder_on_compute = op_input.receive(self.input_name_output_folder) or self.output_folder
        output_folder_on_compute.mkdir(parents=True, exist_ok=True)  # Let exception bubble up if raised.
        output_path = output_folder_on_compute / "output.json"
        with open(output_path, "w") as fp:
            json.dump(result, fp)


# @md.resource(cpu=1, gpu=1, memory="1Gi")
class App(Application):
    """Application class for the MedNIST classifier."""

    def compose(self):
        app_context = AppContext({})  # Let it figure out all the attributes without overriding
        app_input_path = Path(app_context.input_path)
        app_output_path = Path(app_context.output_path)
        model_path = Path(app_context.model_path)
        load_pil_op = LoadPILOperator(self, CountCondition(self, 1), input_folder=app_input_path, name="pil_loader_op")
        classifier_op = MedNISTClassifierOperator(
            self, app_context=app_context, output_folder=app_output_path, model_path=model_path, name="classifier_op"
        )

        my_model_info = ModelInfo("MONAI WG Trainer", "MEDNIST Classifier", "0.1", "xyz")
        my_equipment = EquipmentInfo(manufacturer="MONAI Deploy App SDK", manufacturer_model="DICOM SR Writer")
        my_special_tags = {"SeriesDescription": "Not for clinical use. The result is for research use only."}
        dicom_sr_operator = DICOMTextSRWriterOperator(
            self,
            copy_tags=False,
            model_info=my_model_info,
            equipment_info=my_equipment,
            custom_tags=my_special_tags,
            output_folder=app_output_path,
        )

        self.add_flow(load_pil_op, classifier_op, {("image", "image")})
        self.add_flow(classifier_op, dicom_sr_operator, {("result_text", "text")})


if __name__ == "__main__":
    App().run()
Writing mednist_app/mednist_classifier_monaideploy.py

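Before running the app, it is worth noting what the classifier operator's `transform` property does to each input image: `EnsureChannelFirst` adds a channel axis, `ScaleIntensity` rescales pixel values to `[0, 1]`, and the operator then adds a batch dimension. The following is a rough sketch of that shape and value bookkeeping using plain NumPy as a stand-in for the MONAI transforms (the array names here are illustrative, not SDK API):

```python
import numpy as np

# A stand-in for a MedNIST image: (64, 64), uint8
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

# EnsureChannelFirst(channel_dim="no_channel") -> (1, 64, 64)
chan_first = img[np.newaxis, ...].astype(np.float32)

# ScaleIntensity -> values rescaled into [0, 1]
lo, hi = chan_first.min(), chan_first.max()
scaled = (chan_first - lo) / (hi - lo) if hi > lo else np.zeros_like(chan_first)

# image_tensor[None].float() in compute() -> batch of one, (1, 1, 64, 64)
batch = scaled[np.newaxis, ...]

print(batch.shape, batch.dtype)
```

The model then receives this `(batch, channel, height, width)` float tensor, matching what the DenseNet121 classifier was trained on.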
This time, let’s execute the app from the command line.

!python "mednist_app/mednist_classifier_monaideploy.py"
[info] [gxf_executor.cpp:210] Creating context
[info] [gxf_executor.cpp:1595] Loading extensions from configs...
[info] [gxf_executor.cpp:1741] Activating Graph...
[info] [gxf_executor.cpp:1771] Running Graph...
[info] [gxf_executor.cpp:1773] Waiting for completion...
[info] [gxf_executor.cpp:1774] Graph execution waiting. Fragment: 
[info] [greedy_scheduler.cpp:190] Scheduling 3 entities
/home/mqin/src/monai-deploy-app-sdk/.venv/lib/python3.8/site-packages/monai/data/meta_tensor.py:116: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.)
  return torch.as_tensor(x, *args, **_kwargs).as_subclass(cls)
AbdomenCT
/home/mqin/src/monai-deploy-app-sdk/.venv/lib/python3.8/site-packages/pydicom/valuerep.py:443: UserWarning: Invalid value for VR UI: 'xyz'. Please see <https://dicom.nema.org/medical/dicom/current/output/html/part05.html#table_6.2-1> for allowed values for each VR.
  warnings.warn(msg)
[info] [greedy_scheduler.cpp:369] Scheduler stopped: Some entities are waiting for execution, but there are no periodic or async entities to get out of the deadlock.
[info] [greedy_scheduler.cpp:398] Scheduler finished.
[info] [gxf_executor.cpp:1783] Graph execution deactivating. Fragment: 
[info] [gxf_executor.cpp:1784] Deactivating Graph...
[info] [gxf_executor.cpp:1787] Graph execution finished. Fragment: 
[info] [gxf_executor.cpp:229] Destroying context
!cat $HOLOSCAN_OUTPUT_PATH/output.json
"AbdomenCT"

Packaging app

Let’s package the app with the MONAI Application Packager.

In this version of the App SDK, we need to write out the configuration YAML file as well as the package requirements file in the application folder.

%%writefile mednist_app/app.yaml
%YAML 1.2
---
application:
  title: MONAI Deploy App Package - MedNIST Classifier App
  version: 1.0
  inputFormats: ["file"]
  outputFormats: ["file"]

resources:
  cpu: 1
  gpu: 1
  memory: 1Gi
  gpuMemory: 1Gi
Writing mednist_app/app.yaml
%%writefile mednist_app/requirements.txt
monai>=1.2.0
Pillow>=8.4.0
pydicom>=2.3.0
highdicom>=0.18.2
SimpleITK>=2.0.0
setuptools>=59.5.0 # for pkg_resources
Writing mednist_app/requirements.txt
tag_prefix = "mednist_app"

!monai-deploy package "mednist_app/mednist_classifier_monaideploy.py" -m {models_folder} -c "mednist_app/app.yaml" -t {tag_prefix}:1.0 --platform x64-workstation -l DEBUG
[2023-11-15 19:18:32,397] [INFO] (packager.parameters) - Application: /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/mednist_app/mednist_classifier_monaideploy.py
[2023-11-15 19:18:32,397] [INFO] (packager.parameters) - Detected application type: Python File
[2023-11-15 19:18:32,397] [INFO] (packager) - Scanning for models in {models_path}...
[2023-11-15 19:18:32,397] [DEBUG] (packager) - Model model=/home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/models/model added.
[2023-11-15 19:18:32,397] [INFO] (packager) - Reading application configuration from /home/mqin/src/monai-deploy-app-sdk/notebooks/tutorials/mednist_app/app.yaml...
[2023-11-15 19:18:32,398] [INFO] (packager) - Generating app.json...
[2023-11-15 19:18:32,399] [INFO] (packager) - Generating pkg.json...
[2023-11-15 19:18:32,400] [DEBUG] (common) - 
=============== Begin app.json ===============
{
    "apiVersion": "1.0.0",
    "command": "[\"python3\", \"/opt/holoscan/app/mednist_classifier_monaideploy.py\"]",
    "environment": {
        "HOLOSCAN_APPLICATION": "/opt/holoscan/app",
        "HOLOSCAN_INPUT_PATH": "input/",
        "HOLOSCAN_OUTPUT_PATH": "output/",
        "HOLOSCAN_WORKDIR": "/var/holoscan",
        "HOLOSCAN_MODEL_PATH": "/opt/holoscan/models",
        "HOLOSCAN_CONFIG_PATH": "/var/holoscan/app.yaml",
        "HOLOSCAN_APP_MANIFEST_PATH": "/etc/holoscan/app.json",
        "HOLOSCAN_PKG_MANIFEST_PATH": "/etc/holoscan/pkg.json",
        "HOLOSCAN_DOCS_PATH": "/opt/holoscan/docs",
        "HOLOSCAN_LOGS_PATH": "/var/holoscan/logs"
    },
    "input": {
        "path": "input/",
        "formats": null
    },
    "liveness": null,
    "output": {
        "path": "output/",
        "formats": null
    },
    "readiness": null,
    "sdk": "monai-deploy",
    "sdkVersion": "0.6.0",
    "timeout": 0,
    "version": 1.0,
    "workingDirectory": "/var/holoscan"
}
================ End app.json ================
                 
[2023-11-15 19:18:32,400] [DEBUG] (common) - 
=============== Begin pkg.json ===============
{
    "apiVersion": "1.0.0",
    "applicationRoot": "/opt/holoscan/app",
    "modelRoot": "/opt/holoscan/models",
    "models": {
        "model": "/opt/holoscan/models"
    },
    "resources": {
        "cpu": 1,
        "gpu": 1,
        "memory": "1Gi",
        "gpuMemory": "1Gi"
    },
    "version": 1.0
}
================ End pkg.json ================
                 
[2023-11-15 19:18:32,429] [DEBUG] (packager.builder) - 
========== Begin Dockerfile ==========


FROM nvcr.io/nvidia/clara-holoscan/holoscan:v0.6.0-dgpu

ENV DEBIAN_FRONTEND=noninteractive
ENV TERM=xterm-256color

ARG UNAME
ARG UID
ARG GID

RUN mkdir -p /etc/holoscan/ \
        && mkdir -p /opt/holoscan/ \
        && mkdir -p /var/holoscan \
        && mkdir -p /opt/holoscan/app \
        && mkdir -p /var/holoscan/input \
        && mkdir -p /var/holoscan/output

LABEL base="nvcr.io/nvidia/clara-holoscan/holoscan:v0.6.0-dgpu"
LABEL tag="mednist_app:1.0"
LABEL org.opencontainers.image.title="MONAI Deploy App Package - MedNIST Classifier App"
LABEL org.opencontainers.image.version="1.0"
LABEL org.nvidia.holoscan="0.6.0"

ENV HOLOSCAN_ENABLE_HEALTH_CHECK=true
ENV HOLOSCAN_INPUT_PATH=/var/holoscan/input
ENV HOLOSCAN_OUTPUT_PATH=/var/holoscan/output
ENV HOLOSCAN_WORKDIR=/var/holoscan
ENV HOLOSCAN_APPLICATION=/opt/holoscan/app
ENV HOLOSCAN_TIMEOUT=0
ENV HOLOSCAN_MODEL_PATH=/opt/holoscan/models
ENV HOLOSCAN_DOCS_PATH=/opt/holoscan/docs
ENV HOLOSCAN_CONFIG_PATH=/var/holoscan/app.yaml
ENV HOLOSCAN_APP_MANIFEST_PATH=/etc/holoscan/app.json
ENV HOLOSCAN_PKG_MANIFEST_PATH=/etc/holoscan/pkg.json
ENV HOLOSCAN_LOGS_PATH=/var/holoscan/logs
ENV PATH=/root/.local/bin:/opt/nvidia/holoscan:$PATH
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/libtorch/1.13.1/lib/:/opt/nvidia/holoscan/lib

RUN apt-get update \
    && apt-get install -y curl jq \
    && rm -rf /var/lib/apt/lists/*

ENV PYTHONPATH="/opt/holoscan/app:$PYTHONPATH"



RUN groupadd -g $GID $UNAME
RUN useradd -rm -d /home/$UNAME -s /bin/bash -g $GID -G sudo -u $UID $UNAME
RUN chown -R holoscan /var/holoscan 
RUN chown -R holoscan /var/holoscan/input 
RUN chown -R holoscan /var/holoscan/output 

# Set the working directory
WORKDIR /var/holoscan

# Copy HAP/MAP tool script
COPY ./tools /var/holoscan/tools
RUN chmod +x /var/holoscan/tools


# Copy gRPC health probe

USER $UNAME

ENV PATH=/root/.local/bin:/home/holoscan/.local/bin:/opt/nvidia/holoscan:$PATH

COPY ./pip/requirements.txt /tmp/requirements.txt

RUN pip install --upgrade pip
RUN pip install --no-cache-dir --user -r /tmp/requirements.txt

# Install Holoscan from PyPI org
RUN pip install holoscan==0.6.0


# Install MONAI Deploy from PyPI org
RUN pip install monai-deploy-app-sdk==0.6.0




COPY ./models  /opt/holoscan/models

COPY ./map/app.json /etc/holoscan/app.json
COPY ./app.config /var/holoscan/app.yaml
COPY ./map/pkg.json /etc/holoscan/pkg.json

COPY ./app /opt/holoscan/app

ENTRYPOINT ["/var/holoscan/tools"]
=========== End Dockerfile ===========

[2023-11-15 19:18:32,429] [INFO] (packager.builder) - 
===============================================================================
Building image for:                 x64-workstation
    Architecture:                   linux/amd64
    Base Image:                     nvcr.io/nvidia/clara-holoscan/holoscan:v0.6.0-dgpu
    Build Image:                    N/A  
    Cache:                          Enabled
    Configuration:                  dgpu
    Holoiscan SDK Package:          pypi.org
    MONAI Deploy App SDK Package:   pypi.org
    gRPC Health Probe:              N/A
    SDK Version:                    0.6.0
    SDK:                            monai-deploy
    Tag:                            mednist_app-x64-workstation-dgpu-linux-amd64:1.0
    
[2023-11-15 19:18:32,690] [INFO] (common) - Using existing Docker BuildKit builder `holoscan_app_builder`
[2023-11-15 19:18:32,691] [DEBUG] (packager.builder) - Building Holoscan Application Package: tag=mednist_app-x64-workstation-dgpu-linux-amd64:1.0
#0 building with "holoscan_app_builder" instance using docker-container driver

#1 [internal] load .dockerignore
#1 transferring context: 1.79kB done
#1 DONE 0.1s

#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 2.49kB done
#2 DONE 0.1s

#3 [internal] load metadata for nvcr.io/nvidia/clara-holoscan/holoscan:v0.6.0-dgpu
#3 DONE 0.4s

#4 [internal] load build context
#4 DONE 0.0s

#5 importing cache manifest from local:12435489437730595250
#5 DONE 0.0s

#6 importing cache manifest from nvcr.io/nvidia/clara-holoscan/holoscan:v0.6.0-dgpu
#6 DONE 0.7s

#7 [ 1/21] FROM nvcr.io/nvidia/clara-holoscan/holoscan:v0.6.0-dgpu@sha256:9653f80f241fd542f25afbcbcf7a0d02ed7e5941c79763e69def5b1e6d9fb7bc
#7 resolve nvcr.io/nvidia/clara-holoscan/holoscan:v0.6.0-dgpu@sha256:9653f80f241fd542f25afbcbcf7a0d02ed7e5941c79763e69def5b1e6d9fb7bc 0.1s done
#7 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 28.62MB 0.2s done
#4 DONE 0.3s

#8 [ 6/21] RUN chown -R holoscan /var/holoscan
#8 CACHED

#9 [ 7/21] RUN chown -R holoscan /var/holoscan/input
#9 CACHED

#10 [ 9/21] WORKDIR /var/holoscan
#10 CACHED

#11 [ 2/21] RUN mkdir -p /etc/holoscan/         && mkdir -p /opt/holoscan/         && mkdir -p /var/holoscan         && mkdir -p /opt/holoscan/app         && mkdir -p /var/holoscan/input         && mkdir -p /var/holoscan/output
#11 CACHED

#12 [ 3/21] RUN apt-get update     && apt-get install -y curl jq     && rm -rf /var/lib/apt/lists/*
#12 CACHED

#13 [ 5/21] RUN useradd -rm -d /home/holoscan -s /bin/bash -g 1000 -G sudo -u 1000 holoscan
#13 CACHED

#14 [10/21] COPY ./tools /var/holoscan/tools
#14 CACHED

#15 [ 4/21] RUN groupadd -g 1000 holoscan
#15 CACHED

#16 [ 8/21] RUN chown -R holoscan /var/holoscan/output
#16 CACHED

#17 [11/21] RUN chmod +x /var/holoscan/tools
#17 CACHED

#18 [12/21] COPY ./pip/requirements.txt /tmp/requirements.txt
#18 DONE 0.4s

#19 [13/21] RUN pip install --upgrade pip
#19 1.132 Defaulting to user installation because normal site-packages is not writeable
#19 1.214 Requirement already satisfied: pip in /usr/local/lib/python3.8/dist-packages (22.0.4)
#19 1.417 Collecting pip
#19 1.467   Downloading pip-23.3.1-py3-none-any.whl (2.1 MB)
#19 1.538      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 32.7 MB/s eta 0:00:00
#19 1.658 Installing collected packages: pip
#19 2.774 Successfully installed pip-23.3.1
#19 2.906 WARNING: You are using pip version 22.0.4; however, version 23.3.1 is available.
#19 2.906 You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command.
#19 DONE 3.1s

#20 [14/21] RUN pip install --no-cache-dir --user -r /tmp/requirements.txt
#20 0.781 Collecting monai>=1.2.0 (from -r /tmp/requirements.txt (line 1))
#20 0.810   Downloading monai-1.3.0-202310121228-py3-none-any.whl.metadata (10 kB)
#20 1.152 Collecting Pillow>=8.4.0 (from -r /tmp/requirements.txt (line 2))
#20 1.165   Downloading Pillow-10.1.0-cp38-cp38-manylinux_2_28_x86_64.whl.metadata (9.5 kB)
#20 1.216 Collecting pydicom>=2.3.0 (from -r /tmp/requirements.txt (line 3))
#20 1.227   Downloading pydicom-2.4.3-py3-none-any.whl.metadata (7.8 kB)
#20 1.342 Collecting highdicom>=0.18.2 (from -r /tmp/requirements.txt (line 4))
#20 1.351   Downloading highdicom-0.22.0-py3-none-any.whl.metadata (3.8 kB)
#20 1.430 Collecting SimpleITK>=2.0.0 (from -r /tmp/requirements.txt (line 5))
#20 1.439   Downloading SimpleITK-2.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.9 kB)
#20 1.777 Collecting setuptools>=59.5.0 (from -r /tmp/requirements.txt (line 6))
#20 1.785   Downloading setuptools-68.2.2-py3-none-any.whl.metadata (6.3 kB)
#20 1.912 Requirement already satisfied: numpy>=1.20 in /usr/local/lib/python3.8/dist-packages (from monai>=1.2.0->-r /tmp/requirements.txt (line 1)) (1.22.3)
#20 1.972 Collecting torch>=1.9 (from monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 1.983   Downloading torch-2.1.1-cp38-cp38-manylinux1_x86_64.whl.metadata (25 kB)
#20 2.163 Collecting pillow-jpls>=1.0 (from highdicom>=0.18.2->-r /tmp/requirements.txt (line 4))
#20 2.182   Downloading pillow_jpls-1.2.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (340 kB)
#20 2.197      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 340.3/340.3 kB 76.9 MB/s eta 0:00:00
#20 2.400 Collecting filelock (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 2.411   Downloading filelock-3.13.1-py3-none-any.whl.metadata (2.8 kB)
#20 2.415 Requirement already satisfied: typing-extensions in /usr/local/lib/python3.8/dist-packages (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1)) (4.7.1)
#20 2.467 Collecting sympy (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 2.477   Downloading sympy-1.12-py3-none-any.whl (5.7 MB)
#20 2.531      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.7/5.7 MB 114.1 MB/s eta 0:00:00
#20 2.617 Collecting networkx (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 2.626   Downloading networkx-3.1-py3-none-any.whl (2.1 MB)
#20 2.652      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 95.7 MB/s eta 0:00:00
#20 2.668 Requirement already satisfied: jinja2 in /usr/local/lib/python3.8/dist-packages (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1)) (3.1.2)
#20 2.721 Collecting fsspec (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 2.730   Downloading fsspec-2023.10.0-py3-none-any.whl.metadata (6.8 kB)
#20 2.760 Collecting nvidia-cuda-nvrtc-cu12==12.1.105 (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 2.772   Downloading nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (23.7 MB)
#20 2.981      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.7/23.7 MB 116.4 MB/s eta 0:00:00
#20 3.068 Collecting nvidia-cuda-runtime-cu12==12.1.105 (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 3.109   Downloading nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (823 kB)
#20 3.124      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 823.6/823.6 kB 94.7 MB/s eta 0:00:00
#20 3.163 Collecting nvidia-cuda-cupti-cu12==12.1.105 (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 3.176   Downloading nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (14.1 MB)
#20 3.309      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.1/14.1 MB 109.7 MB/s eta 0:00:00
#20 3.378 Collecting nvidia-cudnn-cu12==8.9.2.26 (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 3.386   Downloading nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
#20 3.416 Collecting nvidia-cublas-cu12==12.1.3.1 (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 3.430   Downloading nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (410.6 MB)
#20 7.176      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 410.6/410.6 MB 107.3 MB/s eta 0:00:00
#20 8.220 Collecting nvidia-cufft-cu12==11.0.2.54 (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 8.233   Downloading nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (121.6 MB)
#20 9.379      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 121.6/121.6 MB 109.4 MB/s eta 0:00:00
#20 9.783 Collecting nvidia-curand-cu12==10.3.2.106 (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 9.793   Downloading nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (56.5 MB)
#20 10.31      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.5/56.5 MB 116.7 MB/s eta 0:00:00
#20 10.48 Collecting nvidia-cusolver-cu12==11.4.5.107 (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 10.49   Downloading nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (124.2 MB)
#20 11.57      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 124.2/124.2 MB 124.0 MB/s eta 0:00:00
#20 11.93 Collecting nvidia-cusparse-cu12==12.1.0.106 (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 11.94   Downloading nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (196.0 MB)
#20 13.68      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 196.0/196.0 MB 112.6 MB/s eta 0:00:00
#20 14.22 Collecting nvidia-nccl-cu12==2.18.1 (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 14.23   Downloading nvidia_nccl_cu12-2.18.1-py3-none-manylinux1_x86_64.whl (209.8 MB)
#20 16.25      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 209.8/209.8 MB 110.7 MB/s eta 0:00:00
#20 16.82 Collecting nvidia-nvtx-cu12==12.1.105 (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 16.83   Downloading nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)
#20 16.84      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 99.1/99.1 kB 177.1 MB/s eta 0:00:00
#20 16.88 Collecting triton==2.1.0 (from torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 16.89   Downloading triton-2.1.0-0-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.3 kB)
#20 16.94 Collecting nvidia-nvjitlink-cu12 (from nvidia-cusolver-cu12==11.4.5.107->torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 16.95   Downloading nvidia_nvjitlink_cu12-12.3.101-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
#20 17.06 Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from jinja2->torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1)) (2.1.1)
#20 17.16 Collecting mpmath>=0.19 (from sympy->torch>=1.9->monai>=1.2.0->-r /tmp/requirements.txt (line 1))
#20 17.17   Downloading mpmath-1.3.0-py3-none-any.whl (536 kB)
#20 17.18      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 536.2/536.2 kB 171.3 MB/s eta 0:00:00
#20 17.28 Downloading monai-1.3.0-202310121228-py3-none-any.whl (1.3 MB)
#20 17.30    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 106.8 MB/s eta 0:00:00
#20 17.31 Downloading Pillow-10.1.0-cp38-cp38-manylinux_2_28_x86_64.whl (3.6 MB)
#20 17.36    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.6/3.6 MB 85.2 MB/s eta 0:00:00
#20 17.37 Downloading pydicom-2.4.3-py3-none-any.whl (1.8 MB)
#20 17.40    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 82.5 MB/s eta 0:00:00
#20 17.42 Downloading highdicom-0.22.0-py3-none-any.whl (825 kB)
#20 17.43    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 825.0/825.0 kB 76.9 MB/s eta 0:00:00
#20 17.45 Downloading SimpleITK-2.3.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (52.7 MB)
#20 18.20    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 52.7/52.7 MB 93.6 MB/s eta 0:00:00
#20 18.21 Downloading setuptools-68.2.2-py3-none-any.whl (807 kB)
#20 18.22    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 807.9/807.9 kB 125.5 MB/s eta 0:00:00
#20 18.23 Downloading torch-2.1.1-cp38-cp38-manylinux1_x86_64.whl (670.2 MB)
#20 24.62    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 670.2/670.2 MB 74.1 MB/s eta 0:00:00
#20 24.63 Downloading nvidia_cudnn_cu12-8.9.2.26-py3-none-manylinux1_x86_64.whl (731.7 MB)
#20 31.44    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 731.7/731.7 MB 65.6 MB/s eta 0:00:00
#20 31.46 Downloading triton-2.1.0-0-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (89.2 MB)
#20 32.91    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 89.2/89.2 MB 90.3 MB/s eta 0:00:00
#20 32.92 Downloading filelock-3.13.1-py3-none-any.whl (11 kB)
#20 32.93 Downloading fsspec-2023.10.0-py3-none-any.whl (166 kB)
#20 32.94    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 166.4/166.4 kB 69.7 MB/s eta 0:00:00
#20 32.96 Downloading nvidia_nvjitlink_cu12-12.3.101-py3-none-manylinux1_x86_64.whl (20.5 MB)
#20 34.08    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 20.5/20.5 MB 19.1 MB/s eta 0:00:00
#20 38.74 Installing collected packages: SimpleITK, mpmath, sympy, setuptools, pydicom, Pillow, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, networkx, fsspec, filelock, triton, pillow-jpls, nvidia-cusparse-cu12, nvidia-cudnn-cu12, nvidia-cusolver-cu12, highdicom, torch, monai
#20 85.54 Successfully installed Pillow-10.1.0 SimpleITK-2.3.1 filelock-3.13.1 fsspec-2023.10.0 highdicom-0.22.0 monai-1.3.0 mpmath-1.3.0 networkx-3.1 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.18.1 nvidia-nvjitlink-cu12-12.3.101 nvidia-nvtx-cu12-12.1.105 pillow-jpls-1.2.0 pydicom-2.4.3 setuptools-68.2.2 sympy-1.12 torch-2.1.1 triton-2.1.0
#20 DONE 87.6s

#21 [15/21] RUN pip install holoscan==0.6.0
#21 0.757 Defaulting to user installation because normal site-packages is not writeable
#21 1.044 Collecting holoscan==0.6.0
#21 1.084   Downloading holoscan-0.6.0-cp38-cp38-manylinux2014_x86_64.whl.metadata (4.4 kB)
#21 1.124 Requirement already satisfied: cloudpickle~=2.2 in /usr/local/lib/python3.8/dist-packages (from holoscan==0.6.0) (2.2.1)
#21 1.126 Requirement already satisfied: python-on-whales~=0.60 in /usr/local/lib/python3.8/dist-packages (from holoscan==0.6.0) (0.63.0)
#21 1.128 Requirement already satisfied: Jinja2~=3.1 in /usr/local/lib/python3.8/dist-packages (from holoscan==0.6.0) (3.1.2)
#21 1.130 Requirement already satisfied: packaging~=23.1 in /usr/local/lib/python3.8/dist-packages (from holoscan==0.6.0) (23.1)
#21 1.131 Requirement already satisfied: pyyaml~=6.0 in /usr/local/lib/python3.8/dist-packages (from holoscan==0.6.0) (6.0.1)
#21 1.132 Requirement already satisfied: requests~=2.28 in /usr/local/lib/python3.8/dist-packages (from holoscan==0.6.0) (2.31.0)
#21 1.134 Requirement already satisfied: pip>=20.2 in /home/holoscan/.local/lib/python3.8/site-packages (from holoscan==0.6.0) (23.3.1)
#21 1.276 Collecting wheel-axle-runtime<1.0 (from holoscan==0.6.0)
#21 1.285   Downloading wheel_axle_runtime-0.0.5-py3-none-any.whl.metadata (7.7 kB)
#21 1.308 Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from Jinja2~=3.1->holoscan==0.6.0) (2.1.1)
#21 1.319 Requirement already satisfied: pydantic<2,>=1.5 in /usr/local/lib/python3.8/dist-packages (from python-on-whales~=0.60->holoscan==0.6.0) (1.10.12)
#21 1.320 Requirement already satisfied: tqdm in /usr/local/lib/python3.8/dist-packages (from python-on-whales~=0.60->holoscan==0.6.0) (4.65.0)
#21 1.321 Requirement already satisfied: typer>=0.4.1 in /usr/local/lib/python3.8/dist-packages (from python-on-whales~=0.60->holoscan==0.6.0) (0.9.0)
#21 1.321 Requirement already satisfied: typing-extensions in /usr/local/lib/python3.8/dist-packages (from python-on-whales~=0.60->holoscan==0.6.0) (4.7.1)
#21 1.332 Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.8/dist-packages (from requests~=2.28->holoscan==0.6.0) (3.2.0)
#21 1.332 Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.8/dist-packages (from requests~=2.28->holoscan==0.6.0) (3.4)
#21 1.333 Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.8/dist-packages (from requests~=2.28->holoscan==0.6.0) (2.0.4)
#21 1.334 Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.8/dist-packages (from requests~=2.28->holoscan==0.6.0) (2023.7.22)
#21 1.337 Requirement already satisfied: filelock in /home/holoscan/.local/lib/python3.8/site-packages (from wheel-axle-runtime<1.0->holoscan==0.6.0) (3.13.1)
#21 1.381 Requirement already satisfied: click<9.0.0,>=7.1.1 in /usr/local/lib/python3.8/dist-packages (from typer>=0.4.1->python-on-whales~=0.60->holoscan==0.6.0) (8.1.6)
#21 1.444 Downloading holoscan-0.6.0-cp38-cp38-manylinux2014_x86_64.whl (52.8 MB)
#21 2.307    ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 52.8/52.8 MB 27.8 MB/s eta 0:00:00
#21 2.319 Downloading wheel_axle_runtime-0.0.5-py3-none-any.whl (12 kB)
#21 2.826 Installing collected packages: wheel-axle-runtime, holoscan
#21 3.814 Successfully installed holoscan-0.6.0 wheel-axle-runtime-0.0.5
#21 DONE 4.4s

#22 [16/21] RUN pip install monai-deploy-app-sdk==0.6.0
#22 0.661 Defaulting to user installation because normal site-packages is not writeable
#22 0.843 Collecting monai-deploy-app-sdk==0.6.0
#22 0.872   Downloading monai_deploy_app_sdk-0.6.0-py3-none-any.whl (125 kB)
#22 0.895      ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 125.1/125.1 KB 7.3 MB/s eta 0:00:00
#22 0.918 Requirement already satisfied: numpy>=1.21.6 in /usr/local/lib/python3.8/dist-packages (from monai-deploy-app-sdk==0.6.0) (1.22.3)
#22 0.919 Requirement already satisfied: holoscan~=0.6.0 in /home/holoscan/.local/lib/python3.8/site-packages (from monai-deploy-app-sdk==0.6.0) (0.6.0)
#22 0.994 Collecting colorama>=0.4.1
#22 1.002   Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)
#22 1.091 Collecting typeguard>=3.0.0
#22 1.105   Downloading typeguard-4.1.5-py3-none-any.whl (34 kB)
#22 1.130 Requirement already satisfied: cloudpickle~=2.2 in /usr/local/lib/python3.8/dist-packages (from holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (2.2.1)
#22 1.131 Requirement already satisfied: wheel-axle-runtime<1.0 in /home/holoscan/.local/lib/python3.8/site-packages (from holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (0.0.5)
#22 1.132 Requirement already satisfied: Jinja2~=3.1 in /usr/local/lib/python3.8/dist-packages (from holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (3.1.2)
#22 1.133 Requirement already satisfied: packaging~=23.1 in /usr/local/lib/python3.8/dist-packages (from holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (23.1)
#22 1.134 Requirement already satisfied: python-on-whales~=0.60 in /usr/local/lib/python3.8/dist-packages (from holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (0.63.0)
#22 1.135 Requirement already satisfied: pyyaml~=6.0 in /usr/local/lib/python3.8/dist-packages (from holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (6.0.1)
#22 1.136 Requirement already satisfied: pip>=20.2 in /usr/local/lib/python3.8/dist-packages (from holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (22.0.4)
#22 1.137 Requirement already satisfied: requests~=2.28 in /usr/local/lib/python3.8/dist-packages (from holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (2.31.0)
#22 1.151 Requirement already satisfied: typing-extensions>=4.7.0 in /usr/local/lib/python3.8/dist-packages (from typeguard>=3.0.0->monai-deploy-app-sdk==0.6.0) (4.7.1)
#22 1.266 Collecting importlib-metadata>=3.6
#22 1.274   Downloading importlib_metadata-6.8.0-py3-none-any.whl (22 kB)
#22 1.371 Collecting zipp>=0.5
#22 1.378   Downloading zipp-3.17.0-py3-none-any.whl (7.4 kB)
#22 1.393 Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.8/dist-packages (from Jinja2~=3.1->holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (2.1.1)
#22 1.401 Requirement already satisfied: tqdm in /usr/local/lib/python3.8/dist-packages (from python-on-whales~=0.60->holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (4.65.0)
#22 1.402 Requirement already satisfied: pydantic<2,>=1.5 in /usr/local/lib/python3.8/dist-packages (from python-on-whales~=0.60->holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (1.10.12)
#22 1.403 Requirement already satisfied: typer>=0.4.1 in /usr/local/lib/python3.8/dist-packages (from python-on-whales~=0.60->holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (0.9.0)
#22 1.414 Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.8/dist-packages (from requests~=2.28->holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (2023.7.22)
#22 1.415 Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.8/dist-packages (from requests~=2.28->holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (3.2.0)
#22 1.416 Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.8/dist-packages (from requests~=2.28->holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (3.4)
#22 1.417 Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.8/dist-packages (from requests~=2.28->holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (2.0.4)
#22 1.423 Requirement already satisfied: filelock in /home/holoscan/.local/lib/python3.8/site-packages (from wheel-axle-runtime<1.0->holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (3.13.1)
#22 1.464 Requirement already satisfied: click<9.0.0,>=7.1.1 in /usr/local/lib/python3.8/dist-packages (from typer>=0.4.1->python-on-whales~=0.60->holoscan~=0.6.0->monai-deploy-app-sdk==0.6.0) (8.1.6)
#22 1.945 Installing collected packages: zipp, colorama, importlib-metadata, typeguard, monai-deploy-app-sdk
#22 2.185 Successfully installed colorama-0.4.6 importlib-metadata-6.8.0 monai-deploy-app-sdk-0.6.0 typeguard-4.1.5 zipp-3.17.0
#22 2.190 WARNING: You are using pip version 22.0.4; however, version 23.3.1 is available.
#22 2.190 You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command.
#22 DONE 2.4s

#23 [17/21] COPY ./models  /opt/holoscan/models
#23 DONE 0.2s

#24 [18/21] COPY ./map/app.json /etc/holoscan/app.json
#24 DONE 0.1s

#25 [19/21] COPY ./app.config /var/holoscan/app.yaml
#25 DONE 0.1s

#26 [20/21] COPY ./map/pkg.json /etc/holoscan/pkg.json
#26 DONE 0.1s

#27 [21/21] COPY ./app /opt/holoscan/app
#27 DONE 0.1s

#28 exporting to docker image format
#28 exporting layers
#28 exporting layers 157.5s done
#28 exporting manifest sha256:6d3e7548287a6a3abc70110c29982b2b32483515fb249876f50112b03dac40a6 0.0s done
#28 exporting config sha256:69287893ca549aef4897eb7391ab557b1cc5802f1c547ac41b761808e04a7fa4 0.0s done
#28 sending tarball
#28 ...

#29 importing to docker
#29 DONE 90.4s

#28 exporting to docker image format
#28 sending tarball 132.9s done
#28 DONE 290.5s

#30 exporting content cache
#30 preparing build cache for export
#30 writing layer sha256:0709800848b4584780b40e7e81200689870e890c38b54e96b65cd0a3b1942f2d done
#30 ... (writing-layer entries for the remaining cached layers elided) ...
#30 preparing build cache for export 57.9s done
#30 writing config sha256:7cfec7bd2b3ff69855a31c8535d3935c07e6028f6e6f6ad8d1ee72dca22a059e 0.0s done
#30 writing manifest sha256:ae24e011466a0d1cad0f5738f6d2871e7cc99b4f959833c08e056e8dadd6f56c 0.0s done
#30 DONE 57.9s
[2023-11-15 19:26:02,805] [INFO] (packager) - Build Summary:

Platform: x64-workstation/dgpu
    Status:     Succeeded
    Docker Tag: mednist_app-x64-workstation-dgpu-linux-amd64:1.0
    Tarball:    None

Note

Building a MONAI Application Package (Docker image) can take time. Use the `-l DEBUG` option if you want to see detailed progress.
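As a sketch, a packaging invocation with debug logging might look like the following (the app folder, config file, tag, and model folder names here are illustrative assumptions, not the exact paths used earlier in this tutorial):

```
# Package the app with verbose logging; paths and tag are illustrative.
monai-deploy package my_app \
    -c my_app/app.yaml \
    -t mednist_app:1.0 \
    --platform x64-workstation \
    -m my_models \
    -l DEBUG
```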

We can see that the Docker image has been created.

!docker image ls | grep {tag_prefix}
mednist_app-x64-workstation-dgpu-linux-amd64                                              1.0                        69287893ca54   5 minutes ago   15.6GB

Executing packaged app locally

We can choose to display and export the MAP manifests, but in this example, we will just run the MAP through the MONAI Application Runner.
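For reference, one way to display the manifests without running the app is to read them straight out of the image with plain Docker; the paths below follow the `COPY` steps shown in the build log above:

```
# Print the application and package manifests baked into the MAP image.
docker run --rm --entrypoint cat \
    mednist_app-x64-workstation-dgpu-linux-amd64:1.0 \
    /etc/holoscan/app.json /etc/holoscan/pkg.json
```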

# Clear the output folder and run the MAP. The input is expected to be a folder.
!rm -rf $HOLOSCAN_OUTPUT_PATH
!monai-deploy run -i $HOLOSCAN_INPUT_PATH -o $HOLOSCAN_OUTPUT_PATH mednist_app-x64-workstation-dgpu-linux-amd64:1.0
[2023-11-15 19:26:07,374] [INFO] (runner) - Checking dependencies...
[2023-11-15 19:26:07,375] [INFO] (runner) - --> Verifying if "docker" is installed...

[2023-11-15 19:26:07,375] [INFO] (runner) - --> Verifying if "docker-buildx" is installed...

[2023-11-15 19:26:07,375] [INFO] (runner) - --> Verifying if "mednist_app-x64-workstation-dgpu-linux-amd64:1.0" is available...

[2023-11-15 19:26:07,454] [INFO] (runner) - Reading HAP/MAP manifest...
Preparing to copy... Copying from container - 0B Successfully copied 2.56kB to /tmp/tmp7n6pc6u1/app.json
Preparing to copy... Copying from container - 0B Successfully copied 2.05kB to /tmp/tmp7n6pc6u1/pkg.json
[2023-11-15 19:26:07,741] [INFO] (runner) - --> Verifying if "nvidia-ctk" is installed...

[2023-11-15 19:26:07,994] [INFO] (common) - Launching container (c634a4b0db9a) using image 'mednist_app-x64-workstation-dgpu-linux-amd64:1.0'...
    container name:      quizzical_hopper
    host name:           mingq-dt
    network:             host
    user:                1000:1000
    ulimits:             memlock=-1:-1, stack=67108864:67108864
    cap_add:             CAP_SYS_PTRACE
    ipc mode:            host
    shared memory size:  67108864
    devices:             
2023-11-16 03:26:08 [INFO] Launching application python3 /opt/holoscan/app/mednist_classifier_monaideploy.py ...

[info] [app_driver.cpp:1025] Launching the driver/health checking service

[info] [gxf_executor.cpp:210] Creating context

[info] [server.cpp:73] Health checking server listening on 0.0.0.0:8777

[info] [gxf_executor.cpp:1595] Loading extensions from configs...

[info] [gxf_executor.cpp:1741] Activating Graph...

[info] [gxf_executor.cpp:1771] Running Graph...

[info] [gxf_executor.cpp:1773] Waiting for completion...

[info] [gxf_executor.cpp:1774] Graph execution waiting. Fragment: 

[info] [greedy_scheduler.cpp:190] Scheduling 3 entities

[info] [greedy_scheduler.cpp:369] Scheduler stopped: Some entities are waiting for execution, but there are no periodic or async entities to get out of the deadlock.

[info] [greedy_scheduler.cpp:398] Scheduler finished.

[info] [gxf_executor.cpp:1783] Graph execution deactivating. Fragment: 

[info] [gxf_executor.cpp:1784] Deactivating Graph...

[info] [gxf_executor.cpp:1787] Graph execution finished. Fragment: 

[info] [gxf_executor.cpp:229] Destroying context

/home/holoscan/.local/lib/python3.8/site-packages/monai/data/meta_tensor.py:116: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.)

  return torch.as_tensor(x, *args, **_kwargs).as_subclass(cls)

/home/holoscan/.local/lib/python3.8/site-packages/pydicom/valuerep.py:443: UserWarning: Invalid value for VR UI: 'xyz'. Please see <https://dicom.nema.org/medical/dicom/current/output/html/part05.html#table_6.2-1> for allowed values for each VR.

  warnings.warn(msg)

AbdomenCT

[2023-11-15 19:26:14,982] [INFO] (common) - Container 'quizzical_hopper'(c634a4b0db9a) exited.
!cat $HOLOSCAN_OUTPUT_PATH/output.json
"AbdomenCT"
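The MAP writes its classification result to `output.json` as a bare JSON string. A minimal sketch of how a downstream script might consume that result (the folder layout is an assumption based on `$HOLOSCAN_OUTPUT_PATH` above):

```python
import json
from pathlib import Path

def read_classification(output_dir: str) -> str:
    """Load the class label string the app wrote to output.json."""
    with open(Path(output_dir) / "output.json") as f:
        # The file holds a single JSON-encoded string, e.g. "AbdomenCT".
        return json.load(f)
```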

Note: Please execute the following script once the exercise is done.

# Remove the data files in the temporary folder
if directory is None:
    shutil.rmtree(root_dir)
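The check above only deletes the dataset when it was downloaded into a temporary folder (i.e. no user-specified `directory`). A slightly more defensive sketch of the same cleanup, guarding against a missing folder (`directory` and `root_dir` are assumed from the training setup earlier in the tutorial):

```python
import os
import shutil

def cleanup_temp_data(directory, root_dir):
    """Remove the downloaded dataset only when it lives in a temp folder we created."""
    if directory is None and os.path.isdir(root_dir):
        shutil.rmtree(root_dir, ignore_errors=True)
```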