
From data annotation to yolov5 training


Environment setup

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ conda create --name yolov5 python=3.7
$ conda activate yolov5
$ pip install -r requirements.txt
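
As a quick sanity check that the dependencies are in place, the repo's bundled sample images can be run through detect.py (a minimal sketch; the yolov5n.pt weights are downloaded automatically on first use):

$ python detect.py --weights yolov5n.pt --source data/images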

For reference, the GPU environment is as follows:

$ nvidia-smi
Wed Nov  2 06:23:36 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.75       Driver Version: 517.40       CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A6000    On   | 00000000:01:00.0 Off |                  Off |
| 30%   30C    P8     8W / 300W |      0MiB / 49140MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA RTX A6000    On   | 00000000:C1:00.0  On |                  Off |
| 30%   32C    P8    21W / 300W |    751MiB / 49140MiB |     15%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Install a PyTorch build that matches the CUDA version (11.7 here):

conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
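
To confirm the install actually picked up CUDA (a minimal check, assuming the yolov5 conda env is active):

$ python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.cuda.device_count())"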

Training

$ python train.py --img 640 --batch 8 --epochs 3 --data PlateDetectionV0.yaml --weights yolov5n.pt
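
Each run writes to runs/train/exp*/ (exp, exp2, exp3, ...), and the weights referenced in later steps live under its weights/ subdirectory. The exp directory below is just an example:

$ ls runs/train/exp/weights
best.pt  last.pt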

The dataset can also be cached in RAM; CUDA_LAUNCH_BLOCKING=1 makes CUDA errors surface at the failing call:
CUDA_LAUNCH_BLOCKING=1 python train.py --data PlateDetectionV0.yaml --weights yolov5n.pt --cache ram

Pinning the run to GPU 0, 20 epochs at 320 px, with hyperparameters taken from an earlier evolution run:
CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=1 python train.py --data PlateDetectionV0.yaml --weights yolov5n.pt --cache ram --epochs 20 --imgsz 320  --hyp runs/evolve/exp3/hyp_evolve.yaml

dataset.yaml

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/PlateDetectionV1Komaba  # dataset root dir
train: Train.txt  # train images (relative to 'path')
val: Validation.txt  # val images (relative to 'path')
test: Test.txt # Optional

# Classes (ref. obj.names)
names:
  0: 20kg_olympic_bar
  1: 20kg_olympic_rubber_plate
  2: 10kg_olympic_rubber_plate
  3: 5kg_olympic_rubber_plate
  4: 2.5kg_olympic_rubber_plate

# Download script/URL (optional); this block is carried over from the COCO template and is not needed for this custom dataset
download: |
  from utils.general import download, Path
  # Download labels
  segments = False  # segment or box labels
  dir = Path(yaml['path'])  # dataset root dir
  url = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/'
  urls = [url + ('coco2017labels-segments.zip' if segments else 'coco2017labels.zip')]  # labels
  download(urls, dir=dir.parent)

  # Download data
  urls = ['http://images.cocodataset.org/zips/train2017.zip',  # 19G, 118k images
          'http://images.cocodataset.org/zips/val2017.zip',  # 1G, 5k images
          'http://images.cocodataset.org/zips/test2017.zip']  # 7G, 41k images (optional)
  download(urls, dir=dir / 'images', threads=3)
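
For reference, Train.txt / Validation.txt / Test.txt are plain lists of image paths (relative to path:), and each image has a matching .txt under labels/ with one "class x_center y_center width height" row per box, all normalized to 0-1. The file names below are made up for illustration:

$ head -n 2 ../datasets/PlateDetectionV1Komaba/Train.txt
./images/train/IMG_0001.jpg
./images/train/IMG_0002.jpg

$ cat ../datasets/PlateDetectionV1Komaba/labels/train/IMG_0001.txt
0 0.503 0.461 0.812 0.377
2 0.655 0.470 0.144 0.239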

(Aside) Point apt at the us.archive.ubuntu.com mirror:

sudo sed -i "s/archive.ubuntu.com/us.archive.ubuntu.com/" /etc/apt/sources.list

Train on the best hyperparameters

CUDA_VISIBLE_DEVICES=0 CUDA_LAUNCH_BLOCKING=1 python train.py --data PlateDetectionV1Komaba.yaml --weights yolov5n.pt --cache ram --epochs 20 --imgsz 320 \
  --hyp runs/evolve/exp6/hyp_evolve.yaml --device 0
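
For context, the hyp_evolve.yaml files referenced above come out of YOLOv5's hyperparameter evolution (--evolve), which reruns training for a number of generations and writes the best hyperparameters found to runs/evolve/exp*/hyp_evolve.yaml. A sketch of how such a run could be launched; the epoch count and generation count here are placeholders:

CUDA_VISIBLE_DEVICES=0 python train.py --data PlateDetectionV1Komaba.yaml --weights yolov5n.pt --cache ram --epochs 10 --imgsz 320 --evolve 300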

Export as TensorFlow.js

python export.py --weights runs/train/exp21/weights/best.pt --include tfjs
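
If the export succeeds, a *_web_model directory should appear next to the weights, holding model.json plus binary weight shards (shard count and names depend on model size):

$ ls runs/train/exp21/weights/best_web_model
group1-shard1of1.bin  model.json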