How to Run kohya-ss LoRA on Paperspace


Introduction

https://console.paperspace.com/signup?R=ZS1N2LM
If you sign up for Paperspace through this link, you will receive $10 in credit.

https://onlinegamernikki.com/paperspace_kohyaver_lora_tutorial
I followed this article.
I made some changes to the code for my own use.

I wrote this as a memo for my own records, so some parts may be hard to follow.

The setup and startup of Paperspace are explained with images on the reference site, so it may be easier to understand there.

Subscribing to Paperspace Pro

It is a bit confusing, but there are both pay-as-you-go and monthly plans.
Pay-as-you-go is billed hourly.
If you subscribe to a monthly plan, "Free" is displayed in the GPU section.

Pro costs $8 per month. With it, you can use GPUs with up to 24 GB of VRAM as often as you like, in sessions of up to 6 hours.

*Note: You may not be able to use a GPU if there is no availability.
A quick reference table for GPUs and plans can be found here:
Source: https://qiita.com/kunishou/items/dccb44848e5b572619bc


Project Creation and Startup

Log in to Paperspace and proceed by referring to the site below.
We will use Gradient; be careful not to start Core by mistake.
https://onlinegamernikki.com/paperspace_kohyaver_lora_tutorial#toc3

Installation of LoRA

During the pip installs below, you may see the following warning. It can be safely ignored:

WARNING: Running pip as the 'root' user can result in broken permissions and
conflicting behaviour with the system package manager. It is recommended to
use a virtual environment instead: https://pip.pypa.io/warnings/venv

mkdir LoRA
cd LoRA
sudo apt update -y && sudo apt upgrade -y
git clone https://github.com/kohya-ss/sd-scripts.git
git clone https://github.com/derrian-distro/LoRA_Easy_Training_Scripts.git
cd sd-scripts
apt -y install python3.10
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116
pip install -U -r requirements.txt
pip install -U --pre triton
pip install -U -I --no-deps xformers-0.0.14.dev0-cp310-cp310-linux_x86_64.whl

// This part could not be installed due to an OSError. If your goal is LoRA training, you might not need to do this.
cd ..
accelerate config
//- This machine (enter)
//- No distributed training (enter)
//- NO (enter)
//- NO (enter)
//- NO (enter)
//- all (enter)
//- fp16 (enter)
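The prompts above just produce a default_config.yaml. If you prefer to skip the interactive prompts on later startups, you can write that file directly. This is a sketch: the field names below assume the default_config.yaml format accelerate used at the time, so verify them against your installed version.

```shell
# Write the accelerate config non-interactively (assumed field names,
# matching the answers given to the prompts above).
mkdir -p ~/.cache/huggingface/accelerate
cat > ~/.cache/huggingface/accelerate/default_config.yaml <<'EOF'
compute_environment: LOCAL_MACHINE
distributed_type: 'NO'
mixed_precision: fp16
num_machines: 1
num_processes: 1
use_cpu: false
EOF
```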

cp LoRA_Easy_Training_Scripts/lora_train_command_line.py sd-scripts
cd sd-scripts

This completes the environment setup. All that's left is to run LoRA.
You can execute it with accelerate launch.
I run it as follows.
Please replace the "[file path]" parts.

accelerate launch --num_cpu_threads_per_process 12 train_network.py --pretrained_model_name_or_path="[file_path.ckpt]" --train_data_dir="[file_path]" --reg_data_dir="[file_path]" --output_dir="[file_path]" --resolution=320,960 --train_batch_size=4 --learning_rate=8e-5 --max_train_epochs=10 --save_every_n_epochs=1 --save_model_as=safetensors --clip_skip=2 --seed=42 --color_aug --network_module=networks.lora --keep_tokens=7 --enable_bucket

In the above case, there are four parts you need to change yourself.

  1. --pretrained_model_name_or_path=
    This is the file path to the .ckpt file of the model you want to perform additional training on.

Example:

--pretrained_model_name_or_path=/notebooks/stable-diffusion-webui/models/Stable-diffusion/Evt_M.ckpt
  2. --train_data_dir=
    Specifies the folder for the training materials.

Example:

--train_data_dir=/notebooks/LoRA/Training

There are naming conventions for folder names, so please refer to this for details.
https://note.com/kohya_ss/n/nba4eceaa4594#578bd471-5d68-4cc8-bb45-9ca8d80ef1ed
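As a concrete sketch of that convention (DreamBooth-style, as I understand it: training subfolders are named "<repeat count>_<identifier> <class>", regularization subfolders "<repeat count>_<class>"). The names "mychar" and "girl" are made up for illustration, and $HOME stands in for /notebooks so this runs anywhere:

```shell
# Example folder layout for sd-scripts training data (illustrative names).
BASE="$HOME/LoRA"                          # on Paperspace this would be /notebooks/LoRA
mkdir -p "$BASE/Training/10_mychar girl"   # training images, repeated 10x per epoch
mkdir -p "$BASE/reg/1_girl"                # regularization images, repeated 1x
```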

  3. --reg_data_dir=
    Specifies the folder for regularization images.

Example:

--reg_data_dir=/notebooks/LoRA/reg
  4. --output_dir=
    Specifies the folder where the training results will be saved.

Example:

--output_dir=/notebooks/LoRA/testLoRA
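Once the four paths are filled in, a quick pre-flight check can catch typos before a long training run. This is a sketch: check_paths is a helper I made up, shown here with the example paths from above.

```shell
# Hypothetical helper: report any path that does not exist.
check_paths() {
  status=0
  for p in "$@"; do
    [ -e "$p" ] || { echo "missing: $p"; status=1; }
  done
  return $status
}

# Check the four paths from the examples above before launching.
check_paths \
  /notebooks/stable-diffusion-webui/models/Stable-diffusion/Evt_M.ckpt \
  /notebooks/LoRA/Training \
  /notebooks/LoRA/reg \
  || echo "fix the paths before running accelerate launch"
```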

Starting Up from the Second Time Onward

cd LoRA
sudo apt update -y && sudo apt upgrade -y
cd sd-scripts
apt -y install python3.10
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116
pip install -U -r requirements.txt
pip install -U --pre triton
pip install -U -I --no-deps xformers-0.0.14.dev0-cp310-cp310-linux_x86_64.whl
// This part could not be installed due to an OSError. If your goal is LoRA training, you might not need to do this.
cd ..
accelerate config
// Answer the prompts the same way as during the first setup.
cd sd-scripts
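To avoid retyping these commands after every restart, they can be saved as a small script. This just writes the steps above verbatim without running them; I'm assuming the LoRA folder lives under /notebooks, the Gradient working directory.

```shell
# Save the resume steps as a one-command script (does not execute them here).
cat > ~/resume_lora.sh <<'EOF'
#!/bin/bash
set -e
cd /notebooks/LoRA
apt update -y && apt upgrade -y
cd sd-scripts
apt -y install python3.10
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116
pip install -U -r requirements.txt
pip install -U --pre triton
EOF
chmod +x ~/resume_lora.sh
```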

Recommended Books for Image Generation

[2023 Latest] Includes over 1,000 types of prompts! Spellbook for NovelAI and Local Use: Tips, Techniques, and Recommended Tools (Introduction for AI Artists) Kindle Edition

https://amzn.to/3YVp90y

Becoming a God-tier Artist through AI Collaboration: Understanding Stable Diffusion from Research Papers
https://amzn.to/41aLQ30

Leading-edge Technology: How to Use AUTOMATIC1111 / Stable Diffusion web UI
https://amzn.to/3Kcjboe
