
Using NVIDIA NGC with Docker on Rescale: NVIDIA Modulus Getting Started Example


About this article

This article demonstrates how to use Docker on Rescale compute nodes to pull and run containers from NVIDIA NGC. As a practical example, we run the quick installation check from the NVIDIA Modulus Getting Started guide.

Preparation: Enabling Docker

Docker is listed in the software catalog during Rescale job setup, but it is grayed out by default.

Click on the grayed-out Docker icon to send a Software Request.

It may be smoother to first consult Rescale personnel (sales or technical staff), or to submit a request to Rescale Support.

Job Setup

Execute the job with the following settings:

  • Inputs
    • nvidia_modulus_getting_started.py
  • Software Settings
    • Docker latest (Rescale linux8 GPU)
    • Command
docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --runtime nvidia -v ${PWD}:/workspace --rm nvcr.io/nvidia/modulus/modulus:24.12 /bin/bash -c "python nvidia_modulus_getting_started.py"
  • Hardware Settings
    • Mallorn 12 cores (NVIDIA A100, 1 GPU)
    • Walltime: 1 hour
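For reference, the docker run command above can be read flag by flag as follows. This is the same command, reformatted with annotations; the shared-memory and ulimit values follow NVIDIA's general recommendations for running NGC deep learning containers.

```shell
# --shm-size=1g           : enlarge shared memory (/dev/shm) beyond Docker's 64 MB default
# --ulimit memlock=-1     : remove the locked-memory limit (pinned host buffers for GPU transfers)
# --ulimit stack=67108864 : raise the stack size limit to 64 MB
# --runtime nvidia        : use the NVIDIA container runtime so the A100 GPU is visible in the container
# -v ${PWD}:/workspace    : mount the job's working directory (containing the input script) at /workspace
# --rm                    : delete the container once the command exits
docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
  --runtime nvidia -v ${PWD}:/workspace --rm \
  nvcr.io/nvidia/modulus/modulus:24.12 \
  /bin/bash -c "python nvidia_modulus_getting_started.py"
```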

The content of nvidia_modulus_getting_started.py is as follows:

import torch
from modulus.models.mlp.fully_connected import FullyConnected

# Build a small fully connected network: 32 input features -> 64 output features
model = FullyConnected(in_features=32, out_features=64)

# A batch of 128 random samples, each with 32 features
input = torch.randn(128, 32)
output = model(input)
print(output.shape)  # expected: torch.Size([128, 64])

This script is taken from the "A quick installation check can also be done by running the following" section of the NVIDIA Modulus Getting Started guide.
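The output shape the check verifies follows directly from the layer's underlying matrix multiply: a (128, 32) input batch projected through a (32, 64) weight matrix yields a (128, 64) result. A minimal stdlib-only sketch of that shape arithmetic (no torch or Modulus required; the random values stand in for real weights and inputs):

```python
import random

batch, in_features, out_features = 128, 32, 64

# Random input batch and weight matrix, mimicking the core mapping
# of a fully connected layer (bias and activation omitted)
x = [[random.random() for _ in range(in_features)] for _ in range(batch)]
w = [[random.random() for _ in range(out_features)] for _ in range(in_features)]

# y = x @ w : each row of x is projected from 32 features to 64
y = [[sum(x[i][k] * w[k][j] for k in range(in_features))
      for j in range(out_features)]
     for i in range(batch)]

print(len(y), len(y[0]))  # prints: 128 64
```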

Execution Result

torch.Size([128, 64]) appears in process_output.log, confirming that the quick installation check completed successfully.

Discussion