
DN_yolo related notes

kotaproj

How to use Colaboratory

Mount Google Drive

# Mount Google Drive
from google.colab import drive
drive.mount('/content/gdrive')
# Move to the working directory
%cd "/content/gdrive/MyDrive/Colab Notebooks/study_deepsort"

windows10 - setting up a deepsort environment

Environment notes

  • GPU
    • NVIDIA GeForce GTX 1070

Required modules

> pip list
Package           Version
----------------- -----------
easydict          1.9
numpy             1.20.2
opencv-python     4.5.1.48
Pillow            8.2.0
pip               20.2.3
pyaml             20.4.0
PyYAML            5.4.1
scipy             1.6.2
setuptools        49.2.1
torch             1.8.1+cu102
torchaudio        0.8.1
torchvision       0.9.1+cu102
typing-extensions 3.7.4.3

Work log -> ok

(env_deepsort) PS > Get-History

  Id CommandLine
  -- -----------
   2 python3 -m venv env_deepsort
   3 .\env_deepsort\Scripts\activate
   4 pip3 install torch==1.8.1+cu102 torchvision==0.9.1+cu102 torchaudio===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html   
   5 git clone https://github.com/ZQPei/deep_sort_pytorch.git
   7 cd .\deep_sort_pytorch\
   9 pip install opencv-python
  11 pip install scipy
  13 pip install pyaml
  15 pip install easydict
  16 python .\yolov3_deepsort.py ../video.mp4 --save_path ../output_study_video

(env_deepsort) PS E:\md\py_study_deepsort\deep_sort_pytorch> 

Bug notes

When capturing via a webcam, the output video was not saved.
The height and width assignments were swapped; fixed as below.

Before the fix:

            self.im_width = frame.shape[0]
            self.im_height = frame.shape[1]

After the fix:

            self.im_height = frame.shape[0]
            self.im_width = frame.shape[1]

=> OpenCV frames are NumPy arrays indexed in (height, width) order.
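The shape ordering behind this fix can be checked with a dummy array (the zero array here is just a stand-in for a captured frame, so no camera is needed):

```python
import numpy as np

# OpenCV returns frames as NumPy arrays shaped (height, width, channels),
# so shape[0] is the height and shape[1] is the width.
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy 640x480 BGR frame

im_height = frame.shape[0]  # 480
im_width = frame.shape[1]   # 640
print(im_height, im_width)  # 480 640
```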


yolov5 - how to use torch.hub()

On top of the environment above, also run the following:

  33 pip install pandas
  34 pip install requests
  35 pip install tqdm
  36 pip install matplotlib
  38 pip install seaborn
  39 pip install tensorboard>=2.4.1

The code is as follows.

import cv2
import torch
from PIL import Image

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Images
for f in ['zidane.jpg', 'bus.jpg']:  # download 2 images
    print(f'Downloading {f}...')
    torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/' + f, f)
img1 = Image.open('zidane.jpg')  # PIL image
img2 = cv2.imread('bus.jpg')[:, :, ::-1]  # OpenCV image (BGR to RGB)
imgs = [img1, img2]  # batch of images

# Inference
results = model(imgs, size=640)  # includes NMS

# Results
results.print()  
# results.save()  # or .show()
results.show()

# Data
print(results.xyxy[0])  # print img1 predictions (pixels)
#                   x1           y1           x2           y2   confidence        class
# tensor([[7.50637e+02, 4.37279e+01, 1.15887e+03, 7.08682e+02, 8.18137e-01, 0.00000e+00],
#         [9.33597e+01, 2.07387e+02, 1.04737e+03, 7.10224e+02, 5.78011e-01, 0.00000e+00],
#         [4.24503e+02, 4.29092e+02, 5.16300e+02, 7.16425e+02, 5.68713e-01, 2.70000e+01]])
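Each row of results.xyxy[0] is one detection in [x1, y1, x2, y2, confidence, class] order. A plain-Python sketch of unpacking the first row from the printout above (values hand-copied, so treat them as illustrative):

```python
# First detection from the printout above: class 0 at ~82% confidence.
row = [750.637, 43.7279, 1158.87, 708.682, 0.818137, 0.0]
x1, y1, x2, y2, conf, cls = row

# Derive box width/height in pixels from the corner coordinates.
w, h = x2 - x1, y2 - y1
print(f"class={int(cls)} conf={conf:.2f} box={w:.0f}x{h:.0f}")
# class=0 conf=0.82 box=408x665
```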

I didn't know about torch.hub. Way too convenient.


Installing torch on the Raspberry Pi

Building from source takes a long time, so install from prebuilt wheels instead.

Reference URL

torch install notes

 $ history
  214  sudo apt install libopenblas-dev libblas-dev m4 cmake cython python3-dev python3-yaml python3-setuptools
  218  git clone https://github.com/Ben-Faessler/Python3-Wheels.git
  219  python3 -m venv env_torch
  220* source env_torch/bin/activa
  222  pip install ./Python3-Wheels/pytorch/torch-1.5.0a0+4ff3872-cp37-cp37m-linux_armv7l.whl 
  223  pip install ./Python3-Wheels/torchvision/torchvision-0.6.0a0+b68adcf-cp37-cp37m-linux_armv7l.whl 
  224  pip install ./Python3-Wheels/torchvision/torchaudio-0.5.0a0+09494ea-cp37-cp37m-linux_armv7l.whl 
  225  sudo apt install libopenmpi3
  227  sudo apt install libatlas-base-dev <= this one
  228  python
  229  history 

*Error notes

>>> import torch
Traceback (most recent call last):

Original error was: libf77blas.so.3: cannot open shared object file: No such file or directory

=> Resolved with: 227 sudo apt install libatlas-base-dev

Running yolov5 on the Raspberry Pi -> NG

It is not working yet, but here is the work log.

  230  wget https://www.piwheels.org/simple/scipy/scipy-1.5.1-cp37-cp37m-linux_armv7l.whl#sha256=a03df78474a6fefd3322b9fe44fe1e38b26cb2737c2bba0f96233cc02dde04a9
  231  pip install ./scipy-1.5.1-cp37-cp37m-linux_armv7l.whl 
  232  git clone https://github.com/ultralytics/yolov5.git
  233  ls -l
  234  cd yolov5/
  235  ls
  237  pip install opencv-python
  239  pip install pandas
  241  pip install requests
  243  pip install tqdm
  245  pip install pyaml
  247  pip install matplotlib
  249  pip install seaborn

After this,

(env_torch) pi@pi4rc:~/study_torch/yolov5 $ python detect.py 
Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, img_size=640, iou_thres=0.45, name='exp', nosave=False, project='runs/detect', save_conf=False, save_txt=False, source='data/images', update=False, view_img=False, weights='yolov5s.pt')
requirements: torch>=1.7.0 not found and is required by YOLOv5, attempting auto-update...
  Could not find a version that satisfies the requirement torch>=1.7.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
No matching distribution found for torch>=1.7.0
Traceback (most recent call last):
  File "/home/pi/study_torch/yolov5/utils/general.py", line 110, in check_requirements
    pkg.require(r)
  File "/home/pi/study_torch/env_torch/lib/python3.7/site-packages/pkg_resources/__init__.py", line 900, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/home/pi/study_torch/env_torch/lib/python3.7/site-packages/pkg_resources/__init__.py", line 791, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.VersionConflict: (torch 1.5.0a0+4ff3872 (/home/pi/study_torch/env_torch/lib/python3.7/site-packages), Requirement.parse('torch>=1.7.0'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "detect.py", line 170, in <module>
    check_requirements(exclude=('pycocotools', 'thop'))
  File "/home/pi/study_torch/yolov5/utils/general.py", line 114, in check_requirements
    print(subprocess.check_output(f"pip install '{e.req}'", shell=True).decode())
  File "/usr/lib/python3.7/subprocess.py", line 395, in check_output
    **kwargs).stdout
  File "/usr/lib/python3.7/subprocess.py", line 487, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 'pip install 'torch>=1.7.0'' returned non-zero exit status 1.
(env_torch) pi@pi4rc:~/study_torch/yolov5 

So torch >= 1.7.0 is required.

=> Build it myself????


windows - yolov5 training -> ng

Ran training without much thought

  44 git clone https://github.com/ultralytics/yolov5.git
  45 ls -l
  46 ls
  47 mv .\yolov5 .\yolov5_mask
  48 cd .\yolov5\
  49 ls
  50 python .\train.py
  51 pip install thop
  52 python .\train.py
  53 pip install pycocotools>=2.0
  54 pip install pycocotools
  55 pip install pycocotools-windows
  56 python .\train.py
  57 python .\train.py

Running python train.py yields:

    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'pip install 'pycocotools>=2.0'' returned non-zero exit status 1.

and the run aborts.
Two things need to be done about this issue.

Installing pycocotools (Windows)

On Windows, install it with:

  55 pip install pycocotools-windows

Fixing requirements.txt

# extras --------------------------------------
thop  # FLOPS computation
# pycocotools>=2.0  # COCO mAP

That is, comment out the pycocotools line.

NG notes

By default, the code assumes the COCO dataset is present, so it complains that the files are missing.
You need to put appropriate data in place.


Installing torch on the Raspberry Pi (2) - torch 1.7.0

The build-from-source approach.

Reference URL

Based on:
https://github.com/Kashu7100/pytorch-armv7l

Work log

~/study_torch17build $ history 

  301  mkdir study_torch17build
  302  cd study_torch17build/
  303  python3 -m venv env_t17build
  304  source env_t17build/bin/activate
  305  sudo apt install libopenblas-dev libblas-dev m4 cmake cython python3-dev python3-yaml python3-setuptools

This may also be needed:
sudo apt install libatlas-base-dev

  306  pip3 install Cython
  307  pip3 install --upgrade setuptools
  308  export NO_CUDA=1
  309  export NO_DISTRIBUTED=1
  310  export NO_MKLDNN=1 
  311  export BUILD_TEST=0
  312  export MAX_JOBS=4
  313  git clone https://github.com/pytorch/pytorch --recursive && cd pytorch
  314  ls 
  315  ls -l
  316  cd study_torch17build/
  317  ls
  318  ls -l
  319  rm -rf pytorch/
  320  export NO_CUDA=1
  321  export NO_DISTRIBUTED=1
  322  export NO_MKLDNN=1 
  323  export BUILD_TEST=0
  324  export MAX_JOBS=4
  325  git clone https://github.com/pytorch/pytorch --recursive && cd pytorch
  326  git checkout v1.7.0
  327  git submodule update --init --recursive
  328  source ../env_t17build/
  329  source ../env_t17build/bin/activate
  330  python setup.py bdist_wheel
  331  pip install wheel
  332  python setup.py bdist_wheel
  333  pip install pyyaml
  334  python setup.py bdist_wheel
  335  cd ..
  336  history 
  337  git clone https://github.com/pytorch/vision && cd vision
  338  git checkout v0.8.1
  339  git submodule update --init --recursive
  340  python setup.py bdist_wheel
  341  ls
  342  python
  343  cd ..
  344  ls
  345  cd pytorch/
  346  ls
  347  python
  348  git clone https://github.com/pytorch/vision && cd vision
  349  git checkout v0.8.1
  350  git submodule update --init --recursive
  351  python setup.py bdist_wheel
  352  cd ..
  353  python
  354  python setup.py install && python -c "import torch"
  355  ls
  356  cd dist/
  357  ls
  358  history | grep whl
  359  pip install torch-1.7.0a0-cp37-cp37m-linux_armv7l.whl 
  360  cd ..
  361  python
  362  pip install numpy
  363  python
  364  ls
  365  cd vision/
  366  python setup.py bdist_wheel
  367  cd ../
  368  ls
  369  cd pytorch/
  370  git clone https://github.com/pytorch/vision && cd vision
  371  cd vision/
  372  git checkout v0.8.1
  373  git submodule update --init --recursive
  374  python setup.py bdist_wheel
  375  sudo apt-get install libavformat-dev
  376  python setup.py bdist_wheel
  377  sudo apt install libavformat-dev
  378  python setup.py bdist_wheel
  379  sudo apt-get install git cmake python3 g++ libxerces-c-dev libfox-1.6-dev libgdal-dev libproj-dev libgl2ps-dev
  380  python setup.py bdist_wheel
  381  ls -l
  382  ls -l build/
  383  ls -l build/lib.linux-armv7l-3.7/
  384  ls -l build/lib.linux-armv7l-3.7/torchvision/
  385  python
  386  pip install pillow
  387  python
  388  sudo apt-get install libsdl2-dev libsdl2-image-dev libsdl2-ttf-dev libsdl2-mixer-dev
  389  sudo apt-get install libomxil-bellagio-dev -y
  390  python setup.py bdist_wheel
  391  sudo apt update
  392  sudo apt upgrade
  393  sudo apt install libswscale-dev
  394  python setup.py bdist_wheel
  395  ls
  396  cd dist/
  397  ls
  398  pip install torchvision-0.8.0a0+45f960c-cp37-cp37m-linux_armv7l.whl 
  399  python
  400  history 


Trying ResNet on the Raspberry Pi

(env_t17build) pi@pi4rc:~/study_torch17build/thub_ssd $ python
Python 3.7.3 (default, Jan 22 2021, 20:04:44) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torchvision
>>> torchvision.__version__
'0.8.0a0+45f960c'
>>> import torch
>>> 
>>> model = torch.hub.load('pytorch/vision:v0.8.0', 'resnet18', pretrained=True)
Downloading: "https://github.com/pytorch/vision/archive/v0.8.0.zip" to /home/pi/.cache/torch/hub/v0.8.0.zip
>>> model.eval()
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  ...
  (fc): Linear(in_features=512, out_features=1000, bias=True)
)
>>> 
>>> # Download an example image from the pytorch website
... import urllib
>>> 
>>> url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
>>> try: urllib.URLopener().retrieve(url, filename)
... except: urllib.request.urlretrieve(url, filename)
... 
('dog.jpg', <http.client.HTTPMessage object at 0xa4722810>)
>>> # sample execution (requires torchvision)
... from PIL import Image
>>> from torchvision import transforms
>>> 
>>> input_image = Image.open(filename)
>>> 
>>> preprocess = transforms.Compose([
...     transforms.Resize(256),
...     transforms.CenterCrop(224),
...     transforms.ToTensor(),
...     transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
... ])
>>> 
>>> input_tensor = preprocess(input_image)
>>> input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model
>>> input_batch.size
<built-in method size of Tensor object at 0xa19f2288>
>>> input_batch.size()
torch.Size([1, 3, 224, 224])
>>> with torch.no_grad():
...     output = model(input_batch)
... 

>>> 
>>> print(output[0])
tensor([ 1.5914e-02, -1.5497e+00,  3.2030e-01, -2.0585e+00, -8.5747e-01,-6.5606e-01, -1.8088e+00, -2.9126e+00,  5.6032e-01,  2.5117e+00])
>>> probabilities = torch.nn.functional.softmax(output[0], dim=0)
>>> probabilities
tensor([7.6952e-08, 1.6081e-08, 1.0433e-07, 9.6676e-09, 3.2130e-08, 4.5104e-07,2.0577e-08, 6.3080e-08, 4.4850e-08, 2.4505e-10, 1.2102e-08, 3.9299e-08,
        1.2409e-08, 4.1150e-09, 1.3263e-07, 9.3352e-07])
>>> # Download ImageNet labels
>>> txt_url, txt_filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
>>> urllib.request.urlretrieve(txt_url, txt_filename)
('imagenet_classes.txt', <http.client.HTTPMessage object at 0xa1a6e6b0>)
>>> # Read the categories
... with open("imagenet_classes.txt", "r") as f:
...     categories = [s.strip() for s in f.readlines()]
... 
>>> 
>>> # Show top categories per image
... top5_prob, top5_catid = torch.topk(probabilities, 5)
>>> for i in range(top5_prob.size(0)):
...     print(categories[top5_catid[i]], top5_prob[i].item())
... 
Samoyed 0.8846226930618286
Arctic fox 0.04580509662628174
white wolf 0.04427633434534073
Pomeranian 0.005621347110718489
Great Pyrenees 0.004651992116123438
>>> 


Playing with yolov5

Training

Followed the tutorial as-is

Failed on the following:

   2 cd .\study_yolov5_train\
   3 ls
   4 cd .\yolov5\
   5 ls
   6 python .\train.py --img 320 --batch 2 --epochs 5 --data coco128.yaml --weights yolov5s.pt
=>ok
   7 python .\train.py --img 640 --batch 2 --epochs 5 --data coco128.yaml --weights yolov5s.pt
=> error - CUDA memory error, something like "could not allocate"
   8 python .\train.py --img 640 --batch 1 --epochs 5 --data coco128.yaml --weights yolov5s.pt
=>ok

Note: with img 640 and batch around 16, a paging error occurs. Memory is clearly nowhere near enough.

Based on a Qiita article

Reference URL

The approach is understood, so only the final step, running with the pretrained weights, was carried out.

  • Downloaded the rock-paper-scissors ("goo/choki/par") pretrained weights from the reference URL
    • best_goochokipar_300epoc.pt
  • Ran the following command
> python detect.py --source 0 --weight best_goochokipar_300epoc.pt
  • It errored out
Where the error occurs:
            # if 'youtube.com/' in url or 'youtu.be/' in url:  # if source is YouTube video

Fixed code (the block that was commented out):
            # if 'youtube.com/' in url or 'youtu.be/' in url:  # if source is YouTube video
            #     check_requirements(('pafy', 'youtube_dl'))
            #     import pafy
            #     url = pafy.new(url).getbest(preftype="mp4").url

-> Just before this point, url is the int 0 (0 was passed for the webcam).
 An int is not iterable, hence the error.
 YouTube is not used here anyway, so comment the block out.

With this commented out, it works.
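The crash itself is easy to reproduce outside yolov5: Python's `in` operator needs an iterable on the right-hand side, and with --source 0 the source is the int 0 (minimal repro, not yolov5 code):

```python
# With "--source 0", the source variable holds the int 0 (webcam index),
# so the YouTube membership check fails before doing anything useful.
url = 0
try:
    'youtube.com/' in url
except TypeError as e:
    print(e)  # argument of type 'int' is not iterable
```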

Based on another article

Reference URL

Code notes

Colaboratory side

from google.colab import drive
drive.mount('/content/drive')

%mkdir study_yolov5
%cd study_yolov5/
!git clone https://github.com/ultralytics/yolov5 
%cd yolov5/
!pip install -qr requirements.txt  # install dependencies (ignore errors)

###---###

import torch
from IPython.display import Image, clear_output  # to display images
from utils.google_utils import gdrive_download  # to download models/datasets

clear_output()
print('Setup complete. Using torch %s %s' % (torch.__version__, torch.cuda.get_device_properties(0) if torch.cuda.is_available() else 'CPU'))

###---###

!python detect.py --weights yolov5s.pt --img 416 --conf 0.4 --source data/images/
Image(filename='runs/detect/exp2/zidane.jpg', width=600)

###---###

# Launch TensorBoard
%load_ext tensorboard
%tensorboard --logdir "/content/drive/My Drive/Colab Notebooks/study_yolov5/yolov5/runs"

###---###

!python train.py --img 416 --batch 8 --epochs 10 --data ./data/Mask\ Wearing.v1-416x416-black-padding.yolov5pytorch/data_step2.yaml --cfg yolov5x.yaml --weights yolov5x.pt --nosave --cache --name test_yolov5x

The data_step2.yaml referenced above contains:

train: "/content/drive/My Drive/Colab Notebooks/study_yolov5/yolov5/data/Mask Wearing.v1-416x416-black-padding.yolov5pytorch/train/images"
val: "/content/drive/My Drive/Colab Notebooks/study_yolov5/yolov5/data/Mask Wearing.v1-416x416-black-padding.yolov5pytorch/train/images"

nc: 2
names: ['mask', 'no-mask']

Combining deepsort with yolov5

Reference URL

Try running it

> python .\main.py --cam 0 --display

-> It worked

To change the target class:

> python .\main.py --cam 0 --display --classes 41

-> 41 : cup
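The --classes indices refer to the 80-class COCO list that yolov5's pretrained weights use. A few neighbours of class 41 for reference (this subset is hand-copied, so verify the indices against model.names in your own environment):

```python
# Small excerpt of the COCO-80 index -> name mapping used by yolov5
# (index 41 is 'cup'; check model.names for the authoritative list).
coco_names = {
    0: 'person',
    39: 'bottle',
    40: 'wine glass',
    41: 'cup',
    42: 'fork',
}
print(coco_names[41])  # cup
```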


Raspberry Pi - rechecking yolov5

It worked.

  805  git clone https://github.com/Kashu7100/pytorch-armv7l.git
  807  cd pytorch-armv7l/
  809  pip install ./torch-1.7.0a0-cp37-cp37m-linux_armv7l.whl 
  810  pip install torchvision-0.8.0a0+45f960c-cp37-cp37m-linux_armv7l.whl 
  815  git clone https://github.com/ultralytics/yolov5.git
  816  cd yolov5/
  825  nano requirements.txt 
---
Comment out torch and torchvision

---
  826  pip install -r requirements.txt 
  827  python detect.py --source data/images/ --weights ../yolov5s.pt --conf 0.4

-> It worked
Reference: https://qiita.com/rokurorock/items/4f07e9c16f6a5297d0aa


bluetooth

pi@pi4rc:~ $ python3
Python 3.7.3 (default, Jan 22 2021, 20:04:44) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess
>>> cp = subprocess.run(['hcitool', 'name', 'xx:xx:xx:xx:xx:xx'], encoding='utf-8', stdout=subprocess.PIPE)
>>> cp
CompletedProcess(args=['hcitool', 'name', 'xx:xx:xx:xx:xx:xx'], returncode=0, stdout='iPhone\n')
>>> cp.stdout
'iPhone\n'
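The same subprocess.run pattern, shown with a portable command so it runs without hcitool (the 'iPhone' string just mimics the device-name output above):

```python
import subprocess

# stdout=subprocess.PIPE captures the output; encoding='utf-8' decodes it
# to str, so no .decode() call is needed on cp.stdout.
cp = subprocess.run(['echo', 'iPhone'], encoding='utf-8', stdout=subprocess.PIPE)
print(cp.returncode)      # 0
print(cp.stdout.strip())  # iPhone
```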

subprocess

https://qiita.com/tanabe13f/items/8d5e4e5350d217dec8f5