
How to Install CUDA 11.5 on Ubuntu 20.04 with RTX 3090

Published 2023/05/24

Introduction

The goal this time is to enable GPU use for deep learning, so we will go as far as confirming that both PyTorch and TensorFlow recognize the GPU.

1. Installation of CUDA Toolkit

We install CUDA 11.5:

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.5.0/local_installers/cuda-repo-ubuntu2004-11-5-local_11.5.0-495.29.05-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2004-11-5-local_11.5.0-495.29.05-1_amd64.deb
sudo apt-key add /var/cuda-repo-ubuntu2004-11-5-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda
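
As an optional sanity check before moving on, nvidia-smi (installed alongside the driver) should list the RTX 3090 once the new driver is loaded; a reboot is usually needed first. This check is not part of the original steps, and the exact driver version shown depends on your setup:

# after a reboot so the new driver is loaded:
nvidia-smi   # the RTX 3090 and the installed driver version should appear in the output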

2. Setting the Path

Check the installed CUDA version:

ls /usr/local/ | grep cuda
cuda
cuda-11
cuda-11.5
cuda-12
cuda-12.1

We want to use cuda-11.5 this time, so we add the following lines to ~/.bashrc:

vi ~/.bashrc
export PATH="/usr/local/cuda-11.5/bin:$PATH"  # Add this line
export LD_LIBRARY_PATH="/usr/local/cuda-11.5/lib64:$LD_LIBRARY_PATH"  # Add this line

source ~/.bashrc

This should also allow nvcc -V to work:

nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0
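
Since several CUDA versions live under /usr/local here, it can be worth confirming that the shell now picks up the 11.5 binaries and libraries. A small optional check (the exact output depends on your environment):

which nvcc                                          # should point to /usr/local/cuda-11.5/bin/nvcc
echo $LD_LIBRARY_PATH | tr ':' '\n' | grep cuda     # /usr/local/cuda-11.5/lib64 should come first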

3. Install TensorRT

Choose the version from the official site: https://developer.nvidia.com/nvidia-tensorrt-8x-download

Right-click the download link to copy its URL, then you can install TensorRT with the following commands:

wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/secure/8.6.1/local_repos/nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-11.8_1.0-1_amd64.deb
sudo dpkg -i nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-11.8_1.0-1_amd64.deb
sudo cp /var/nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-11.8/nv-tensorrt-local-D7BB1B18-keyring.gpg /usr/share/keyrings/
sudo apt update
sudo apt install -y tensorrt
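
To make sure the TensorRT packages were actually installed, listing them with dpkg is a simple optional check (package names can differ slightly between TensorRT releases):

dpkg -l | grep TensorRT   # should list tensorrt and the libnvinfer* packages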

4. Checking GPU Recognition in PyTorch and TensorFlow

PyTorch

Python 3.10.7 [GCC 9.4.0] on linux
Type "help

", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.cuda.is_available())
True
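
Beyond torch.cuda.is_available(), PyTorch can also report which device it sees. A one-liner sketch, assuming the same Python environment as above:

python3 -c "import torch; print(torch.cuda.get_device_name(0))"   # expected to print the RTX 3090's name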

TensorFlow

Python 3.10.7 [GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.test.is_gpu_available()
(omitted)
True
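
As a side note, tf.test.is_gpu_available() is deprecated in recent TensorFlow releases; the currently recommended check is tf.config.list_physical_devices('GPU'). A one-liner sketch, assuming the same environment:

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"   # should list one GPU device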

Both PyTorch and TensorFlow have recognized the GPU!

As a side note, please ensure that your specific hardware and software setup is compatible with these instructions. CUDA and other GPU software often have specific requirements and restrictions. It's always a good idea to double-check against the official documentation of each software component.
