🎗️

【timm】Summary of timm model Suffixes

2024/04/30に公開

1. What's timm

timm is a deep-learning library created by Ross Wightman. It is a collection of SOTA computer vision models, layers, utilities, optimizers, schedulers, data loaders, and augmentations, plus training/validation scripts with the ability to reproduce ImageNet training results.

Quote: timmdocs

2. What's Suffixes

A pretrained timm model has a complex name like efficientvit_b1.r224_in1k, so this time I will break down the meaning of each suffix (there are a large number, so I will explain only the generally used ones).
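Before going through the suffixes, it helps to see the overall shape: the part before the dot is the architecture name, and the part after the dot is the pretrained tag whose pieces are explained below. A minimal split, using only the standard library (no timm install required):

```python
def split_model_name(name: str) -> tuple[str, str]:
    """Split a timm-style model name into (architecture, pretrained tag)."""
    # partition() keeps everything after the first "." as the tag;
    # models without a tag just get an empty string back.
    arch, _, tag = name.partition(".")
    return arch, tag

arch, tag = split_model_name("efficientvit_b1.r224_in1k")
print(arch)  # efficientvit_b1
print(tag)   # r224_in1k
```

The rest of this article is about decoding the pieces of that tag (and sometimes of the architecture name itself).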

3. Suffixes

・First, some short alphanumeric codes like a1, cc, l1, and others
These indicate a "tweak" of the model, such as a different version, layer count, preprocessing, configuration, setting, normalization, or other variation.

1k, 21k, etc.
This typically indicates that the model was pre-trained on the ImageNet-1k dataset, which consists of 1,000 classes (21k refers to ImageNet-21k, which consists of roughly 21,000 classes).

in1k, in21k, etc.
Almost the same as above; this typically indicates that the model was pre-trained or fine-tuned on the ImageNet-1k (or another) dataset.

sw_in1k, sw_in21k, etc.
Almost the same as above; this indicates the model was trained with supervised learning.

ft_in1k, ft_in21k, etc.
This indicates that the model has been fine-tuned on a specific dataset (like in1k or in22k), usually after being pre-trained on a broader dataset.

224, 256, 384, 448, 512, etc.
Indicates the standard image resolution for input data. A larger input resolution may lead to better accuracy but requires more computation.

tn
Tiny.
s
Small or slim.
m
Middle.
l
Large.
xl
Extra large.
ra
Random augmentation (RandAugment).
d
Different model architecture.
tf
Trained in TensorFlow.
blur
Probably a model using "blur pooling".

timm has so many models that the same characters may have a different meaning depending on the model. Please use this as a reference only.
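With that caveat in mind, the abbreviations above can be collected into a small lookup table to get a rough human-readable reading of a tag. This is just a summary of the list in this article, not an official mapping, and it will not hold for every model:

```python
# Rough meanings of common tag parts, taken from the list above.
SUFFIX_MEANINGS = {
    "tn": "tiny", "s": "small/slim", "m": "middle", "l": "large",
    "xl": "extra large", "ra": "random augmentation",
    "tf": "trained in TensorFlow", "blur": "blur pooling",
    "sw": "supervised", "ft": "fine-tuned",
    "in1k": "ImageNet-1k", "in21k": "ImageNet-21k",
}

def describe_tag(tag: str) -> list[str]:
    """Translate each underscore-separated part of a tag, keeping unknowns as-is."""
    return [SUFFIX_MEANINGS.get(part, part) for part in tag.split("_")]

print(describe_tag("ra_in1k"))     # ['random augmentation', 'ImageNet-1k']
print(describe_tag("tf_ft_in1k"))  # ['trained in TensorFlow', 'fine-tuned', 'ImageNet-1k']
```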

4. Summary

In this article, I explained the suffixes of timm model names.
That's all. Thank you for reading.

Reference

fast.ai. timmdocs
