
Creating a GPU-Enabled Container with Podman Compose (Rootless)

Published 2025/03/16

Overview

I struggled a bit when creating containers that use GPU acceleration with podman compose, so this is a memo of what worked.

I verified the setup with Jellyfin and Immich; their compose.yml files are included later in this article for reference.

For docker compose you are usually told to add settings like the following, but in the environment described below they do not give the container access to the GPU.
The key point is to generate NVIDIA's CDI spec and use that instead.

compose.yml
# Example that does not work
services:
  app:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities:
                - gpu
compose.yml
# This form does not work either
services:
  app:
    runtime: nvidia
    gpus: all
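
For contrast, the form that does work with rootless Podman (shown in full in the verification section later) simply passes the generated CDI device name through devices:. A minimal preview:

compose.yml
services:
  app:
    devices:
      - "nvidia.com/gpu=all"
    security_opt:
      - "label=disable"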

Environment

| Item | Details | Version, etc. |
| --- | --- | --- |
| Host OS | Ubuntu 24.04 Server | |
| Container manager | Podman | 4.9.3 |
| podman compose backend | podman-compose | 1.0.6 |
| Graphics card | GTX 1650 | Turing (6th generation) |

Preparing the host (installing the NVIDIA driver stack)

CUDA Toolkit

The method recommended in Ubuntu's documentation

Ubuntu officially provides the method below, but it did not work for me even starting from a clean OS install.

sudo ubuntu-drivers install --gpgpu

Reference: Ubuntu Server documentation - Install NVIDIA drivers
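
If you do go down this route, ubuntu-drivers devices at least shows which GPU was detected and which driver package Ubuntu would choose; whether the --gpgpu install then produces a working setup appears to vary by environment.

# List detected GPUs and the recommended driver packages
ubuntu-drivers devices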

NVIDIA's official method

Follow the steps on NVIDIA's official page.
(As of 2025/03/16: CUDA Toolkit 12.8 Update 1)

Reference: CUDA Toolkit - Downloads (Ubuntu 24.04, x86_64, network)

Installing the CUDA Toolkit

# Add the repository
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb

# Install cuda-toolkit
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-8

Installing the driver

sudo apt-get install -y cuda-drivers
# Open-source kernel module variant
# sudo apt-get install -y nvidia-open
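
A reboot is usually needed after installing the driver. Afterwards, a quick generic check that the kernel module is actually loaded:

# Confirm the nvidia kernel modules are loaded
lsmod | grep nvidia

# The driver version the kernel reports
cat /proc/driver/nvidia/version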

Verification - CUDA Toolkit

Once you have gotten this far, you can verify the installation.

nvidia-smi
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1650        Off |   00000000:01:00.0 Off |                  N/A |
| 27%   27C    P8              8W /   90W |       1MiB /   4096MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Jan__6_16:45:21_PST_2023
Cuda compilation tools, release 12.0, V12.0.140
Build cuda_12.0.r12.0/compiler.32267302_0
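
Note that nvcc reports release 12.0 above, which is most likely Ubuntu's own nvcc (from the nvidia-cuda-toolkit package) on the default PATH rather than the 12.8 toolkit just installed under /usr/local/cuda-12.8. If you want the new toolchain on the host, the usual post-install step is something like the following (paths assume the default install location):

# e.g. append to ~/.bashrc
export PATH=/usr/local/cuda-12.8/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

This only matters for building CUDA code on the host; the containers get their CUDA libraries through the CDI mounts described below.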

Container Toolkit

Installation - Container Toolkit

Reference: NVIDIA Docs Hub - NVIDIA Container Toolkit

The steps below follow the documentation as-is. (As of 2025/03/16)

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
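
As a quick check that the toolkit and its CLI are in place (the reported version will vary with the packaged release):

nvidia-ctk --version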

Generating the CDI spec for the GPU

With Docker, the GPU would already be usable inside containers at this point, but Podman needs one more step.

Podman requires a CDI (Container Device Interface) definition in order to access the GPU, and it is not generated automatically, so we generate it with a tool.

Installing the Container Toolkit also installs the NVIDIA Container Toolkit CLI (nvidia-ctk), which is what we use here.

Reference: NVIDIA Container Toolkit - Container Device Interface

sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Check the result
nvidia-ctk cdi list

You should get output like the following.
(The xxxxxxxxxxxxxx part is your GPU's unique ID.)

INFO[0000] Found 3 CDI devices                          
nvidia.com/gpu=0
nvidia.com/gpu=GPU-xxxxxxxxxxxxxx
nvidia.com/gpu=all
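
One caveat from the NVIDIA documentation: the generated spec hard-codes driver file paths and versions, so it must be regenerated after every driver upgrade (or MIG configuration change). If you want that to happen automatically at boot, here is a minimal sketch of a oneshot systemd unit; the unit name is just an example, and you may need to adjust the ordering so it runs after the driver is loaded.

/etc/systemd/system/nvidia-cdi-refresh.service
[Unit]
Description=Regenerate the NVIDIA CDI spec
# Needs the NVIDIA driver available; ordering after nvidia-persistenced is one option
After=nvidia-persistenced.service

[Service]
Type=oneshot
ExecStart=/usr/bin/nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable nvidia-cdi-refresh.service.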

Verification - Container Toolkit

Confirm that the following produces the same output as running nvidia-smi on the host.

Create a test container

# Assign all GPUs
podman run --rm --device nvidia.com/gpu=all --security-opt=label=disable ubuntu nvidia-smi

# Assign a specific GPU / MIG device
podman run --rm \
    --device nvidia.com/gpu=0 \
    --device nvidia.com/gpu=1:0 \
    --security-opt=label=disable \
    ubuntu nvidia-smi
compose.yml
services:
  test-gpu-container:
    image: ubuntu
    devices:
      - "nvidia.com/gpu=all"
    security_opt:
      - "label=disable"
    command: nvidia-smi
# Verify that compose works
podman compose up

In practice, podman-compose issues a podman create command like the following.

podman create --name=test-gpu-container_1 --security-opt label=disable --label io.podman.compose.config-hash=3ffbd6c2a9f9f6a475faafbfdcfdc5015398514743f5493099c3053342e35d5e --label io.podman.compose.project=test-gpu --label io.podman.compose.version=1.0.6 --label PODMAN_SYSTEMD_UNIT=podman-compose@test-gpu.service --label com.docker.compose.project=test-gpu --label com.docker.compose.project.working_dir=/home/username/test-gpu --label com.docker.compose.project.config_files=compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=test-gpu-container --device nvidia.com/gpu=all --net test-gpu_default --network-alias test-gpu-container ubuntu nvidia-smi

Incidentally, the security_opt setting is apparently there for the reason below. If you are not using SELinux, it should not be necessary.

The --security-opt label=disable option disables SELinux isolation for the host Podman. With SELinux, containerized processes cannot mount all of the file systems they need in order to run inside the container.
Source: Red Hat Enterprise Linux

Checking a running container

podman exec -it container_name nvidia-smi
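
To see the GPU actually being used (for example during a transcode or an ML job), watching utilization on the host also works; nvidia-smi dmon prints per-second figures including encoder/decoder usage:

# Per-second utilization on the host (sm / mem / enc / dec columns)
nvidia-smi dmon

# or simply poll the standard view
watch -n 1 nvidia-smi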

compose.yml examples

Jellyfin

compose.yml
services:
  jellyfin:
    image: docker.io/jellyfin/jellyfin:latest
    container_name: jellyfin
    volumes:
      - ./container/config:/config
      - ./container/cache:/cache
      - type: bind
        source: /mnt/nas/movie
        target: /media
        read_only: true
      # Optional - extra fonts to be used during transcoding with subtitle burn-in
      - type: bind
        source: ./container/fonts
        target: /usr/local/share/fonts/custom
        read_only: true
    restart: always
    ports:
      - 8096:8096
    environment:
      - TZ=Asia/Tokyo
      # Optional - alternative address used for autodiscovery
      # - JELLYFIN_PublishedServerUrl=http://example.com
    devices:
      - "nvidia.com/gpu=all"
    security_opt:
      - label=disable
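
Once hardware transcoding (NVENC) is enabled under the Playback settings in the Jellyfin dashboard, you can check from inside the container that the bundled ffmpeg sees the NVENC encoders. The path below assumes the jellyfin-ffmpeg location used by the official image:

# Assumes the official image's jellyfin-ffmpeg path
podman exec -it jellyfin /usr/lib/jellyfin-ffmpeg/ffmpeg -hide_banner -encoders | grep nvenc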

Immich

hwaccel.ml.yml and hwaccel.transcoding.yml are not used.

compose.yml
name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    volumes:
      # Do not edit the next line. If you want to change the media storage location on your system, edit the value of UPLOAD_LOCATION in the .env file
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
      - ${EXTERNAL_LIBLARY_LOCATION}:/media/external
    env_file:
      - .env
    ports:
      - '2283:2283'
    depends_on:
      - redis
      - database
    restart: always
    healthcheck:
      disable: false
    devices:
      - "nvidia.com/gpu=all"
    security_opt:
      - label=disable

  immich-machine-learning:
    container_name: immich_machine_learning
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-cuda
    volumes:
      # - model-cache:/cache
      - ./model-cache:/cache
    env_file:
      - .env
    restart: always
    healthcheck:
      disable: false
    devices:
      - "nvidia.com/gpu=all"
    security_opt:
      - label=disable

  redis:
    container_name: immich_redis
    image: docker.io/redis:6.2-alpine@sha256:148bb5411c184abd288d9aaed139c98123eeb8824c5d3fce03cf721db58066d8
    healthcheck:
      test: redis-cli ping || exit 1
    restart: always

  database:
    container_name: immich_postgres
    image: docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:739cdd626151ff1f796dc95a6591b55a714f341c737e27f045019ceabf8e8c52
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
    volumes:
      # Do not edit the next line. If you want to change the database storage location on your system, edit the value of DB_DATA_LOCATION in the .env file
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    healthcheck:
      test: >-
        pg_isready --dbname="$${POSTGRES_DB}" --username="$${POSTGRES_USER}" || exit 1;
        Chksum="$$(psql --dbname="$${POSTGRES_DB}" --username="$${POSTGRES_USER}" --tuples-only --no-align
        --command='SELECT COALESCE(SUM(checksum_failures), 0) FROM pg_stat_database')";
        echo "checksum failure count is $$Chksum";
        [ "$$Chksum" = '0' ] || exit 1
      interval: 5m
      start_interval: 30s
      start_period: 5m
    command: >-
      postgres
      -c shared_preload_libraries=vectors.so
      -c 'search_path="$$user", public, vectors'
      -c logging_collector=on
      -c max_wal_size=2GB
      -c shared_buffers=512MB
      -c wal_compression=on
    restart: always

# volumes:
#   model-cache:
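
The compose file above pulls several values from .env. A minimal sketch of the matching .env with placeholder values (EXTERNAL_LIBLARY_LOCATION is the variable name used in the compose file above; the others follow Immich's standard example .env):

.env
# Host paths (placeholders - adjust to your system)
UPLOAD_LOCATION=./library
EXTERNAL_LIBLARY_LOCATION=/mnt/nas/photos
DB_DATA_LOCATION=./postgres

# Image tag ("release" = latest stable)
IMMICH_VERSION=release

# Database credentials (placeholders - change these)
DB_PASSWORD=postgres
DB_USERNAME=postgres
DB_DATABASE_NAME=immich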

References

Setup guides

Hardware encoding support matrices

Related application documentation

NVIDIA encode support matrix

| GPU | H.265 (HEVC) 4:4:4 Max Color | H.265 (HEVC) 4:4:4 Max Res | H.265 (HEVC) 4:2:2 Max Color | H.265 (HEVC) 4:2:2 Max Res | H.265 (HEVC) 4:2:0 Max Color | H.265 (HEVC) 4:2:0 Max Res | H.264 (AVCHD) 4:2:0 Max Color | H.264 (AVCHD) 4:2:0 Max Res | VP9 4:2:0 Max Color | VP9 4:2:0 Max Res | MPEG-2 | VC-1 | AV1 Max Color | AV1 Max Res |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Kepler | N/A | N/A | N/A | N/A | 8-bit | 4096 | 8-bit | 4096 | N/A | N/A | 4K | 4K | N/A | N/A |
| Maxwell (1st Gen) GM107 | N/A | N/A | N/A | N/A | 8-bit | 4096 | 8-bit | 4096 | N/A | N/A | 4K | 4K | N/A | N/A |
| Maxwell (2nd Gen) GM20x | N/A | N/A | N/A | N/A | 8-bit | 4096 | 8-bit | 4096 | N/A | N/A | 4K | 4K | N/A | N/A |
| Pascal | 10-bit | 4096 | N/A | N/A | 10-bit | 4096 | 8-bit | 4096 | 10-bit | 4096 | 4K | 4K | N/A | N/A |
| Volta | 10-bit | 4096 | N/A | N/A | 10-bit | 4096 | 8-bit | 4096 | 10-bit | 4096 | 4K | 4K | N/A | N/A |
| Turing | 10-bit | 8192 | 10-bit | 8192 | 10-bit | 8192 | 8-bit | 8192 | 10-bit | 8192 | 8K | 8K | N/A | N/A |
| Ampere | 10-bit | 8192 | 10-bit | 8192 | 10-bit | 8192 | 8-bit | 8192 | 10-bit | 8192 | 8K | 8K | 10-bit | 8192 |
| Ada | 10-bit | 8192 | 10-bit | 8192 | 10-bit | 8192 | 8-bit | 8192 | 10-bit | 8192 | 8K | 8K | 12-bit | 8192 |

Contents of the CDI spec

CDI nvidia.yaml
/etc/cdi/nvidia.yaml
---
cdiVersion: 0.5.0
containerEdits:
  deviceNodes:
  - path: /dev/nvidia-uvm
  - path: /dev/nvidia-uvm-tools
  - path: /dev/nvidiactl
  env:
  - NVIDIA_VISIBLE_DEVICES=void
  hooks:
  - args:
    - nvidia-cdi-hook
    - create-symlinks
    - --link
    - ../libnvidia-allocator.so.1::/usr/lib/x86_64-linux-gnu/gbm/nvidia-drm_gbm.so
    - --link
    - libglxserver_nvidia.so.570.124.06::/usr/lib/x86_64-linux-gnu/nvidia/xorg/libglxserver_nvidia.so
    hookName: createContainer
    path: /usr/bin/nvidia-cdi-hook
  - args:
    - nvidia-cdi-hook
    - create-symlinks
    - --link
    - libcuda.so.1::/usr/lib/x86_64-linux-gnu/libcuda.so
    - --link
    - libnvidia-opticalflow.so.1::/usr/lib/x86_64-linux-gnu/libnvidia-opticalflow.so
    - --link
    - libGLX_nvidia.so.570.124.06::/usr/lib/x86_64-linux-gnu/libGLX_indirect.so.0
    hookName: createContainer
    path: /usr/bin/nvidia-cdi-hook
  - args:
    - nvidia-cdi-hook
    - update-ldcache
    - --folder
    - /usr/lib/x86_64-linux-gnu
    - --folder
    - /usr/lib/x86_64-linux-gnu/vdpau
    hookName: createContainer
    path: /usr/bin/nvidia-cdi-hook
  mounts:
  - containerPath: /usr/bin/nvidia-cuda-mps-control
    hostPath: /usr/bin/nvidia-cuda-mps-control
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/bin/nvidia-cuda-mps-server
    hostPath: /usr/bin/nvidia-cuda-mps-server
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/bin/nvidia-debugdump
    hostPath: /usr/bin/nvidia-debugdump
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/bin/nvidia-persistenced
    hostPath: /usr/bin/nvidia-persistenced
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/bin/nvidia-smi
    hostPath: /usr/bin/nvidia-smi
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /etc/vulkan/icd.d/nvidia_icd.json
    hostPath: /usr/share/vulkan/icd.d/nvidia_icd.json
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /etc/vulkan/implicit_layer.d/nvidia_layers.json
    hostPath: /usr/share/vulkan/implicit_layer.d/nvidia_layers.json
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libGLESv1_CM_nvidia.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libGLESv1_CM_nvidia.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libGLESv2_nvidia.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libGLESv2_nvidia.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libcuda.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libcuda.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libcudadebugger.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libcudadebugger.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvcuvid.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvcuvid.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-allocator.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-allocator.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-egl-gbm.so.1.1.2
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-egl-gbm.so.1.1.2
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-egl-wayland.so.1.1.18
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-egl-wayland.so.1.1.18
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-encode.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-encode.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-fbc.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-fbc.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-glcore.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-glcore.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-glsi.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-glsi.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-gpucomp.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-gpucomp.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-ngx.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-ngx.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-nvvm.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-nvvm.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-opticalflow.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-opticalflow.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-pkcs11-openssl3.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-pkcs11-openssl3.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-rtcore.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-rtcore.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-sandboxutils.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-sandboxutils.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-tls.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-tls.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvidia-vksc-core.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvidia-vksc-core.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/libnvoptix.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/libnvoptix.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/share/nvidia/nvoptix.bin
    hostPath: /usr/share/nvidia/nvoptix.bin
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /lib/firmware/nvidia/570.124.06/gsp_ga10x.bin
    hostPath: /lib/firmware/nvidia/570.124.06/gsp_ga10x.bin
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /lib/firmware/nvidia/570.124.06/gsp_tu10x.bin
    hostPath: /lib/firmware/nvidia/570.124.06/gsp_tu10x.bin
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/vdpau/libvdpau_nvidia.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/vdpau/libvdpau_nvidia.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/share/X11/xorg.conf.d/10-nvidia.conf
    hostPath: /usr/share/X11/xorg.conf.d/10-nvidia.conf
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/share/egl/egl_external_platform.d/10_nvidia_wayland.json
    hostPath: /usr/share/egl/egl_external_platform.d/10_nvidia_wayland.json
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json
    hostPath: /usr/share/egl/egl_external_platform.d/15_nvidia_gbm.json
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/share/glvnd/egl_vendor.d/10_nvidia.json
    hostPath: /usr/share/glvnd/egl_vendor.d/10_nvidia.json
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/nvidia/xorg/libglxserver_nvidia.so.570.124.06
    hostPath: /usr/lib/x86_64-linux-gnu/nvidia/xorg/libglxserver_nvidia.so.570.124.06
    options:
    - ro
    - nosuid
    - nodev
    - bind
  - containerPath: /usr/lib/x86_64-linux-gnu/nvidia/xorg/nvidia_drv.so
    hostPath: /usr/lib/x86_64-linux-gnu/nvidia/xorg/nvidia_drv.so
    options:
    - ro
    - nosuid
    - nodev
    - bind
devices:
- containerEdits:
    deviceNodes:
    - path: /dev/nvidia0
  name: "0"
- containerEdits:
    deviceNodes:
    - path: /dev/nvidia0
  name: GPU-xxxxxx
- containerEdits:
    deviceNodes:
    - path: /dev/nvidia0
  name: all
kind: nvidia.com/gpu