Building a Windows 11 + WSL2 + Docker Desktop environment
First, install WSL from a Command Prompt run as administrator:
wsl --install
In addition, enable "Windows Subsystem for Linux" under "Turn Windows features on or off", then reboot.
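If you prefer to do that step from the command line, the same feature (plus the Virtual Machine Platform component that WSL2 needs) can be enabled with DISM from the same elevated prompt; a minimal sketch, noting that wsl --install normally enables both for you:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
Then reboot as above.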
Download the NVIDIA drivers for WSL.
(CUDA (Compute Unified Device Architecture): a programming language and platform for GPUs)
Pick the driver that matches your environment.
Reboot.
While you're at it, update WSL2:
wsl.exe --update
From here on, the steps follow this article.
Remove the old GPG key (GNU Privacy Guard: a key used to verify that the software you are about to install is genuine):
sudo apt-key del 7fa2af80
Install the Linux x86 CUDA toolkit:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-repo-ubuntu2004-11-7-local_11.7.0-515.43.04-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2004-11-7-local_11.7.0-515.43.04-1_amd64.deb
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda-toolkit-11-7
dpkg: installs .deb package files
Running sudo dpkg -i cuda-keyring_1.0-1_all.deb crashed the machine with a blue screen.
After rebooting, I confirmed with dpkg -l that cuda-keyring was installed, so I carried on.
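A minimal form of that check, filtering the package list for the keyring installed above:
dpkg -l | grep cuda-keyring
The package should appear with status ii (installed).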
Check that the install worked:
/usr/local/cuda/bin/nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_May__3_18:49:52_PDT_2022
Cuda compilation tools, release 11.7, V11.7.64
Build cuda_11.7.r11.7/compiler.31294372_0
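Optionally, to call nvcc without typing the full path, CUDA can be added to the shell environment; a sketch assuming the default /usr/local/cuda install prefix:
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
nvcc --version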
Install Docker Desktop.
You're good if it looks like this in PowerShell:
wsl -l -v
  NAME                   STATE           VERSION
* Ubuntu                 Running         2
  docker-desktop-data    Running         2
  docker-desktop         Running         2
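If a distribution shows VERSION 1 here instead of 2, it can be converted (Ubuntu being the distro name listed above):
wsl --set-default-version 2
wsl --set-version Ubuntu 2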
Check that everything works:
docker pull tensorflow/tensorflow:latest-gpu
docker run --gpus all -it -p 8888:8888 tensorflow/tensorflow:latest-gpu
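Since port 8888 is being published, the Jupyter variant of the image may be what you actually want; a sketch assuming the standard latest-gpu-jupyter tag:
docker pull tensorflow/tensorflow:latest-gpu-jupyter
docker run --gpus all -it -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter
Either way, inside the running container: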
nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.73.08    Driver Version: 512.96       CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...   On  | 00000000:09:00.0  On |                  N/A |
| 41%   39C    P8    12W / 125W |    683MiB /  6144MiB |      4%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
root@4a32b0ad9879:/# python3
Python 3.8.10 (default, Mar 15 2022, 12:22:08)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
>>> tensorflow.version
<module 'tensorflow._api.v2.version' from '/usr/local/lib/python3.8/dist-packages/tensorflow/_api/v2/version/__init__.py'>
>>> from tensorflow.python.client import device_lib
>>> device_lib.list_local_devices()
2022-06-09 10:30:16.773862: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-09 10:30:16.939314: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:09:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-09 10:30:16.949436: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:09:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-09 10:30:16.949754: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:09:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-09 10:30:17.632875: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:09:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-09 10:30:17.633230: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:09:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-09 10:30:17.633289: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Could not identify NUMA node of platform GPU id 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-06-09 10:30:17.633641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:09:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-09 10:30:17.633723: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /device:GPU:0 with 3954 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1660 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 3105584825495753608
xla_global_id: -1
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 4146659328
locality {
  bus_id: 1
  links {
  }
}
incarnation: 2705822217186746394
physical_device_desc: "device: 0, name: NVIDIA GeForce GTX 1660 SUPER, pci bus id: 0000:09:00.0, compute capability: 7.5"
xla_global_id: 416903419
]
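For a quicker, non-interactive version of the same check, the visible GPUs can be printed in one shot; a sketch using tf.config.list_physical_devices from the TF2 API:
docker run --gpus all --rm tensorflow/tensorflow:latest-gpu \
  python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
A non-empty list of PhysicalDevice entries means the container can see the GPU.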
The GPU also shows up properly in Docker Desktop.