
Attempting to build onnxruntime-gpu v1.22.0 + TensorRT 10.9.0 + CUDA 12.8 with the onnx-tensorrt OSS parser

PINTO
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2025 NVIDIA Corporation
Built on Fri_Feb_21_20:23:50_PST_2025
Cuda compilation tools, release 12.8, V12.8.93
Build cuda_12.8.r12.8/compiler.35583870_0
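If a later step needs the CUDA release number programmatically, it can be pulled out of the nvcc banner; a minimal sketch, using the banner text copied from the output above as sample input:

```shell
# Extract "12.8" from the nvcc version banner (text pasted from above,
# so this runs even on a machine without nvcc installed).
banner='Cuda compilation tools, release 12.8, V12.8.93'
echo "$banner" | sed -n 's/.*release \([0-9.]*\),.*/\1/p'   # 12.8
```

On a machine with the toolkit installed, `nvcc --version | sed -n '…'` with the same expression does the same thing live.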

# A very old TensorRT 10.2.0 is still lingering on the system
dpkg -l | grep tensorrt
ii  nv-tensorrt-local-repo-ubuntu2204-10.2.0-cuda-12.5 1.0-1                                                      amd64        nv-tensorrt-local repository configuration files
ii  tensorrt                                           10.9.0.34-1+cuda12.8                                       amd64        Meta package for TensorRT
ii  tensorrt-dev                                       10.9.0.34-1+cuda12.8                                       amd64        Meta package for TensorRT development libraries
ii  tensorrt-libs                                      10.9.0.34-1+cuda12.8                                       amd64        Meta package for TensorRT runtime libraries
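A quick way to spot version mismatches in listings like the one above is to cut each line down to just the package name and version; a sketch, using one of the dpkg lines above as sample input:

```shell
# awk's default whitespace splitting puts the package name in $2 and the
# version in $3 for "dpkg -l" output (sample line pasted from above).
dpkg_line='ii  tensorrt  10.9.0.34-1+cuda12.8  amd64  Meta package for TensorRT'
echo "$dpkg_line" | awk '{print $2, $3}'   # tensorrt 10.9.0.34-1+cuda12.8
```

Piping the real listing through the same awk (`dpkg -l | grep tensorrt | awk '{print $2, $3}'`) makes the stray 10.2.0 local-repo entry stand out immediately.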

python -c "import tensorrt as trt; print(trt.__version__)"
10.2.0.post1

ls -l /usr/lib/x86_64-linux-gnu/libnvinfer.so.*
lrwxrwxrwx 1 root root        20  31 17:30 /usr/lib/x86_64-linux-gnu/libnvinfer.so.10 -> libnvinfer.so.10.9.0
-rw-r--r-- 1 root root 672076800  31 17:30 /usr/lib/x86_64-linux-gnu/libnvinfer.so.10.9.0

# Remove the very old TensorRT 10.2.0
sudo rm -rf /home/${USER}/.local/lib/python3.10/site-packages/tensorrt/
sudo dpkg --purge nv-tensorrt-local-repo-ubuntu2204-10.2.0-cuda-12.5

python -c "import tensorrt as trt; print(trt.__version__)"
10.9.0.34
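To double-check that the import now resolves to the apt-installed bindings rather than another leftover copy under `~/.local`, `importlib.util.find_spec` reports where Python would load the module from; a small sketch:

```shell
# find_spec locates the module without fully importing it; spec.origin is
# the file path Python would load, which reveals a shadowing pip install.
python3 - <<'EOF'
import importlib.util

spec = importlib.util.find_spec("tensorrt")
print(spec.origin if spec else "tensorrt not found")
EOF
```

A path under `site-packages` in the home directory would mean a pip copy still shadows the system one; `dist-packages` indicates the apt install.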

The APT-packaged 10.9.0 normally installs to the following paths:
Libraries: /usr/lib/x86_64-linux-gnu/
Headers: /usr/include/x86_64-linux-gnu/
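Before kicking off the long build, it can save a failed configure run to confirm the headers and libraries actually exist at those paths; a sketch, with the file names assumed from the usual Debian `tensorrt-dev` package layout:

```shell
# Check the two files the build will need: the TensorRT header and the
# version-less dev symlink (paths from the apt layout described above).
for f in /usr/include/x86_64-linux-gnu/NvInfer.h \
         /usr/lib/x86_64-linux-gnu/libnvinfer.so; do
  if [ -e "$f" ]; then
    echo "found: $f"
  else
    echo "missing: $f"
  fi
done
```

A "missing" line for `libnvinfer.so` (as opposed to `libnvinfer.so.10`) usually means the `-dev` packages were not installed.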

# Check where cuDNN is installed
sudo find /usr -name "libcudnn.so*" 2>/dev/null

/usr/lib/x86_64-linux-gnu/libcudnn.so.9
/usr/lib/x86_64-linux-gnu/libcudnn.so
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0
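The three entries above form a symlink chain (version-less dev link → SONAME link → real file). A self-contained sketch that reproduces that layout in a temp directory and resolves it with `readlink -f`:

```shell
# Recreate the apt-style cuDNN symlink chain in a scratch directory:
# libcudnn.so -> libcudnn.so.9 -> libcudnn.so.9.8.0 (the real file).
tmp=$(mktemp -d)
touch "$tmp/libcudnn.so.9.8.0"
ln -s libcudnn.so.9.8.0 "$tmp/libcudnn.so.9"
ln -s libcudnn.so.9     "$tmp/libcudnn.so"

# readlink -f follows every link to the canonical target.
readlink -f "$tmp/libcudnn.so"   # ends in libcudnn.so.9.8.0
rm -rf "$tmp"
```

Running `readlink -f` on the real `/usr/lib/x86_64-linux-gnu/libcudnn.so` is the quickest way to confirm which concrete version the loader will actually pick up.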

# Check where TensorRT is installed
sudo find /usr -name "libnvinfer.so*" 2>/dev/null

/usr/lib/x86_64-linux-gnu/libnvinfer.so.10.9.0
/usr/lib/x86_64-linux-gnu/libnvinfer.so
/usr/lib/x86_64-linux-gnu/libnvinfer.so.10
PINTO
######## onnxruntime-gpu v1.21.0
git clone -b v1.21.0 https://github.com/microsoft/onnxruntime
cd onnxruntime
######## onnxruntime-gpu v1.21.0

or

######## onnxruntime-gpu v1.22.0
git clone https://github.com/microsoft/onnxruntime
cd onnxruntime
git checkout 0d26928b57ea55d6ece7e98be834dbb0d8b6c0a0
######## onnxruntime-gpu v1.22.0

Edit cmake/deps.txt to pin onnx to a newer commit:

- onnx;https://github.com/onnx/onnx/archive/refs/tags/v1.17.0.zip;13a60ac5217c104139ce0fd024f48628e7bcf5bc
+ onnx;https://github.com/onnx/onnx/archive/f22a2ad78c9b8f3bd2bb402bfce2b0079570ecb6.zip;324a781c31e30306e30baff0ed7fe347b10f8e3c
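Each deps.txt entry is a semicolon-separated `name;url;sha1` triple, so the replacement hash has to match the new archive. A sketch that splits the replacement line above into its fields:

```shell
# Split a deps.txt entry into its three fields (line copied from above).
echo 'onnx;https://github.com/onnx/onnx/archive/f22a2ad78c9b8f3bd2bb402bfce2b0079570ecb6.zip;324a781c31e30306e30baff0ed7fe347b10f8e3c' \
  | awk -F';' '{print "name:", $1; print "sha1:", $3}'
```

If the hash is in doubt, `curl -L <url> | sha1sum` on the archive should reproduce the third field exactly.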

curl -L -o cmake/patches/onnx/onnx.patch https://github.com/microsoft/onnxruntime/raw/7b2733a526c12b5ef4475edd47fd9997ebc2b2c6/cmake/patches/onnx/onnx.patch

pip install cmake==3.31.6

./build.sh \
--config Release \
--enable_pybind \
--build_wheel \
--parallel $(nproc) \
--cmake_generator Ninja \
--cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=86 \
--cuda_home /usr/local/cuda \
--cudnn_home /usr/lib/x86_64-linux-gnu \
--tensorrt_home /usr/lib/x86_64-linux-gnu \
--use_cuda \
--use_tensorrt \
--use_tensorrt_oss_parser \
--skip_submodule_sync \
--skip_tests
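If the build finishes, the wheel should land under `build/Linux/Release/dist/` (the usual build.sh output layout; exact path assumed here). A sketch that installs it only when it is actually present:

```shell
# Install the freshly built wheel if it exists; otherwise report that the
# build has not produced one yet. Wheel path assumed from build.sh defaults.
ls build/Linux/Release/dist/*.whl 2>/dev/null \
  && pip install build/Linux/Release/dist/*.whl \
  || echo "no wheel yet - run build.sh first"
```

After installation, `python -c "import onnxruntime; print(onnxruntime.get_available_providers())"` should list `TensorrtExecutionProvider` if the TensorRT EP was linked in correctly.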