
Easily running inference on ONNX models in a TensorRT-like environment using onnx_tensorrt.backend

Published 2021/10/20

1. Environment

  1. Docker
  2. NVIDIA GPU
  3. NVIDIA Driver Version: 470.74+

2. Procedure

2-1. Launch the environment

$ docker pull pinto0309/cuda114-tensorrt82:latest

# If you do not need a GUI or a USB camera
$ docker run -it --rm --gpus all \
-v `pwd`:/workspace \
pinto0309/cuda114-tensorrt82:latest

# If you need a GUI or a USB camera
xhost +local: && \
docker run --gpus all -it --rm \
-v `pwd`:/workspace \
-v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
--device /dev/video0:/dev/video0:mwr \
--net=host \
-e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
-e DISPLAY=$DISPLAY \
--privileged \
pinto0309/cuda114-tensorrt82:latest
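
Inside the container, a quick way to confirm that the TensorRT Python bindings are visible is the following minimal sanity check:

import tensorrt
import onnx

print(tensorrt.__version__)  # this image should report 8.2.x
print(onnx.__version__)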

2-2. Write the inference test code

demo_onnx_tensorrt_backend.py

import onnx
import onnx_tensorrt.backend as be
import numpy as np
np.random.seed(0)
from pprint import pprint

# Parse the ONNX model and prepare a TensorRT backend engine on GPU 0
model = onnx.load('dpt_hybrid_480x640.onnx')
engine = be.prepare(model, device='CUDA:0')

# Run inference on a seeded random NCHW float32 tensor
input_data = np.random.random((1,3,480,640)).astype(np.float32)
output = engine.run(input_data)[0]

print(output.shape)
pprint(output)
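
Note that the ONNX graph parsing and the TensorRT engine build happen inside the library (in be.prepare() and/or the first engine.run() call), so the first inference carries the full engine-build cost; subsequent runs reuse the already-built engine.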

2-3. Inference test

This assumes that dpt_hybrid_480x640.onnx is saved in the directory that was the current path when the Docker container was started.

$ python3 demo_onnx_tensorrt_backend.py 
[10/20/2021-09:40:31] [TRT] [W] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[10/20/2021-09:40:31] [TRT] [W] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
(1, 480, 640)
array([[[  0.      ,   0.      ,   0.      , ...,  97.46102 ,
          92.97532 ,  69.64345 ],
        [  0.      ,   0.      ,   0.      , ..., 100.24263 ,
          99.05285 ,  96.60125 ],
        [  0.      ,   0.      ,   0.      , ..., 100.27604 ,
         100.36055 ,  96.658455],
        ...,
        [412.23093 , 417.0685  , 413.90054 , ..., 719.3952  ,
         717.5759  , 717.3224  ],
        [424.08307 , 426.16458 , 418.8609  , ..., 714.8736  ,
         715.30475 , 706.9079  ],
        [428.4545  , 432.19156 , 425.32715 , ..., 713.491   ,
         706.20135 , 693.8857  ]]], dtype=float32)
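
As a sanity check, the same seeded input can also be run through onnxruntime-gpu, which is installed in the container, and compared against the TensorRT result. A minimal sketch (bit-exact agreement is not expected, since TensorRT fuses layers and changes floating-point accumulation order):

import numpy as np
import onnx
import onnx_tensorrt.backend as be
import onnxruntime as ort
np.random.seed(0)

input_data = np.random.random((1,3,480,640)).astype(np.float32)

# TensorRT result via the onnx-tensorrt backend
model = onnx.load('dpt_hybrid_480x640.onnx')
trt_output = be.prepare(model, device='CUDA:0').run(input_data)[0]

# onnxruntime-gpu result for the same input
sess = ort.InferenceSession(
    'dpt_hybrid_480x640.onnx',
    providers=['CUDAExecutionProvider'],
)
ort_output = sess.run(None, {sess.get_inputs()[0].name: input_data})[0]

# Report the largest element-wise deviation
print(np.max(np.abs(trt_output - ort_output)))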

3. Pre-installed software in the Docker container

  1. Ubuntu 20.04
  2. CUDA 11.4.2
  3. cuDNN 8.2.4
  4. tensorrt 8.2
  5. xhost
  6. onnx
  7. onnxruntime-gpu
  8. onnx-simplifier
  9. onnxconverter-common
  10. gdown
  11. matplotlib
  12. pycuda
  13. uff-converter-tf
  14. graphsurgeon-tf
  15. python3-libnvinfer-dev
  16. onnx-graphsurgeon
  17. trtexec
  18. onnx-tensorrt (Python API)
  19. opencv-python
Docker Hub: pinto0309/cuda114-tensorrt82

https://hub.docker.com/repository/docker/pinto0309/cuda114-tensorrt82

4. Dockerfile

FROM nvidia/cuda:11.4.2-devel-ubuntu20.04

ENV DEBIAN_FRONTEND=noninteractive
ARG OSVER=ubuntu2004
ARG CPVER=cp38
ARG CUDAVER=11.4
ARG TENSORRTVER=cuda${CUDAVER}-trt8.2.0.6-ea-20210922
ARG WKDIR=/workspace

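# OS packages: build tools, X11/GUI libraries, and general utilities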
RUN apt-get update && apt-get install -y \
        automake autoconf libpng-dev nano python3-pip \
        curl zip unzip libtool swig zlib1g-dev pkg-config \
        python3-mock libpython3-dev libpython3-all-dev \
        g++ gcc cmake make pciutils cpio gosu wget \
        libgtk-3-dev libxtst-dev sudo apt-transport-https \
        build-essential gnupg git xz-utils vim \
        libva-drm2 libva-x11-2 vainfo libva-wayland2 libva-glx2 \
        libva-dev libdrm-dev xorg xorg-dev protobuf-compiler \
        openbox libx11-dev libgl1-mesa-glx libgl1-mesa-dev \
        libtbb2 libtbb-dev libopenblas-dev libopenmpi-dev \
    && sed -i 's/# set linenumbers/set linenumbers/g' /etc/nanorc \
    && ln -s /usr/bin/python3 /usr/bin/python \
    && apt clean \
    && rm -rf /var/lib/apt/lists/*

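# Python packages: the ONNX toolchain, pycuda, onnxruntime-gpu, and OpenCV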
RUN pip3 install --upgrade pip \
    && pip install onnx \
    && pip install onnx-simplifier \
    && pip install onnxconverter-common \
    && pip install gdown \
    && pip install PyYAML \
    && pip install matplotlib \
    && pip install pycuda \
    && pip uninstall -y onnxruntime onnxruntime-gpu \
    && pip install onnxruntime-gpu \
    && pip install opencv-python \
    && ldconfig \
    && pip cache purge \
    && apt clean \
    && rm -rf /var/lib/apt/lists/*

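# Install TensorRT 8.2 EA from the local .deb repository, then build trtexec from the bundled samples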
COPY nv-tensorrt-repo-${OSVER}-${TENSORRTVER}_1-1_amd64.deb .

RUN dpkg -i nv-tensorrt-repo-${OSVER}-${TENSORRTVER}_1-1_amd64.deb \
    && apt-key add /var/nv-tensorrt-repo-${OSVER}-${TENSORRTVER}/7fa2af80.pub \
    && apt-get update \
    && apt-get install -y \
        tensorrt uff-converter-tf graphsurgeon-tf \
        python3-libnvinfer-dev onnx-graphsurgeon \
    && rm nv-tensorrt-repo-${OSVER}-${TENSORRTVER}_1-1_amd64.deb \
    && cd /usr/src/tensorrt/samples/trtexec \
    && make \
    && apt clean \
    && rm -rf /var/lib/apt/lists/*

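# Build onnx-tensorrt: make install provides the onnx2trt CLI; the Python package is installed from this checkout via setup.py at shell startup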
RUN git clone --recursive https://github.com/onnx/onnx-tensorrt \
    && cd onnx-tensorrt \
    && git checkout 1f041ce6d7b30e9bce0aacb2243309edffc8fb3c \
    && mkdir build && cd build \
    && cmake .. -DTENSORRT_ROOT=/usr/src/tensorrt \
    && make -j$(nproc) && make install

ENV USERNAME=user
RUN echo "root:root" | chpasswd \
    && adduser --disabled-password --gecos "" "${USERNAME}" \
    && echo "${USERNAME}:${USERNAME}" | chpasswd \
    && echo "%${USERNAME}    ALL=(ALL)   NOPASSWD:    ALL" >> /etc/sudoers.d/${USERNAME} \
    && chmod 0440 /etc/sudoers.d/${USERNAME}
USER ${USERNAME}
WORKDIR ${WKDIR}
RUN sudo chown -R ${USERNAME}:${USERNAME} ${WKDIR} \
    && echo "export PATH=$PATH:/usr/src/tensorrt/bin:/onnx-tensorrt/build" >> $HOME/.bashrc \
    && echo "cd /onnx-tensorrt" >> $HOME/.bashrc \
    && echo "sudo python3 setup.py install" >> $HOME/.bashrc \
    && echo "cd ${WKDIR}" >> $HOME/.bashrc

5. Commands to compile ONNX into TRT format ahead of time

$ onnx2trt xxxx.onnx -o xxxx.trt -b 1 -d 16 -v
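
Here -o names the output engine file, -b sets the maximum batch size, -d 16 selects 16-bit (FP16) model precision, and -v increases verbosity.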

$ trtexec \
--int8 \
--calib=xxxx_calib_cache \
--onnx=xxxx.onnx \
--saveEngine=xxxx_int8.trt
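
The trtexec command builds an INT8 engine; the calibration cache passed to --calib must be generated in advance, otherwise the engine will run but its accuracy will be meaningless. Once a .trt file exists, it can be deserialized and executed with the TensorRT Python API plus pycuda, both included in the container. A minimal sketch, assuming a single-input/single-output engine and the placeholder file name xxxx.trt:

import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates and activates a CUDA context

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the pre-built engine
with open('xxxx.trt', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate page-locked host buffers and device buffers for every binding
bindings, host_mem, dev_mem = [], [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(shape), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_mem.append(host)
    dev_mem.append(dev)
    bindings.append(int(dev))

# Assumes binding 0 is the input and binding 1 the output
host_mem[0][:] = np.random.random(host_mem[0].shape).astype(host_mem[0].dtype)
cuda.memcpy_htod(dev_mem[0], host_mem[0])
context.execute_v2(bindings)
cuda.memcpy_dtoh(host_mem[1], dev_mem[1])
print(host_mem[1][:10])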
