
# Anomaly Detection Using Sparse Modeling

Published 2021/03/18

## 1. Introduction

### 1-1. Task Definition

In signal processing, the task of sparse modeling (sparse coding) is to represent input signals with linear combinations of a few dictionary elements.

• Input signal: patches (red, green and blue)
• Linear combinations: coefficients (0.4, 0.1 and 0.9)
• Few dictionary elements: striped pattern boxes in dictionary

• Y : Actual signals. Y.shape -> (num_features, num_patches)
• D : Dictionary. Basis vectors are stored. D.shape -> (num_features, num_basis)
• X : Sparse representation. Coefficients are stored. X.shape -> (num_basis, num_patches)
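The relation between these three arrays is Y ≈ DX. A small NumPy sketch can verify the shapes (the sizes are made up for illustration):

```python
import numpy as np

num_features, num_basis, num_patches = 64, 16, 100  # illustrative sizes

rng = np.random.default_rng(0)
D = rng.standard_normal((num_features, num_basis))  # dictionary: basis vectors in columns
X = np.zeros((num_basis, num_patches))              # sparse representation
# Each patch uses a single basis vector, so X is very sparse.
X[rng.integers(num_basis, size=num_patches), np.arange(num_patches)] = 1.0

Y = D @ X  # actual signals as linear combinations of a few dictionary elements
print(Y.shape)  # (64, 100) -> (num_features, num_patches)
```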

### 1-2. Apply to Anomaly Detection

To apply sparse modeling to anomaly detection, we rely on one assumption: "a dictionary D and sparse representation X learned from normal images cannot represent anomalous images well." This is a practical assumption, because D contains only the minimum elements needed to represent normal images, so anomalous patterns yield a large reconstruction error.
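Under that assumption, the reconstruction error can serve directly as an anomaly score. A minimal sketch, assuming a hypothetical dictionary learned from normal patches (scikit-learn's row convention, Y ≈ X @ D; all sizes and values are illustrative):

```python
import numpy as np
from sklearn.decomposition import sparse_encode

# Hypothetical dictionary learned from normal patches.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 64))
D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atoms

def anomaly_score(patch):
    """Reconstruction error of a patch under the 'normal' dictionary."""
    X = sparse_encode(patch[None, :], D, algorithm="omp", n_nonzero_coefs=3)
    return np.linalg.norm(patch - X @ D)

normal_patch = 0.5 * D[0] + 0.2 * D[3]    # lies in the dictionary's span
anomalous_patch = rng.standard_normal(64)  # random signal, hard to represent
print(anomaly_score(normal_patch) < anomaly_score(anomalous_patch))  # True
```

A patch that the dictionary cannot represent keeps a large residual, which flags it as anomalous.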

### 1-3. Comparison between Sparse Modeling and Deep Learning

|  | Sparse Modeling | Deep Learning |
| --- | --- | --- |
| Data | Sparse modeling focuses only on the essential parts of the signal and works with small amounts of data. | Deep learning algorithms need a very large amount of training data to learn and build up a model. |
| Explainability | Sparse modeling maintains a transparent model that can be both reviewed and verified, so the results are understandable and explainable. | The models created in a deep learning setup work as a black box; they don't explain why they return a certain result or decision. |
| Computational resources | Since sparse modeling consumes minimal power, it can easily be embedded into low-cost equipment or an FPGA. | The special hardware required to process big data is expensive. |

## 2. Orthogonal Matching Pursuit

Orthogonal matching pursuit (OMP) is a greedy algorithm that optimizes the sparse representation X while the dictionary D is held fixed.

Question: what is the ideal basis vector d? Answer: the more similar d is to Y, the better; d = Y would be ideal. Similarity can be measured with the inner products D^T Y: the larger the inner product, the more similar. Note that all basis vectors have the same length, so the magnitudes are directly comparable. Therefore argmax_i |d_i^T Y| gives the index of the basis vector most similar to Y.
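A single atom-selection step of this greedy search might look like the following sketch (toy sizes; the signal is built from atom 5 on purpose):

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 16))
D /= np.linalg.norm(D, axis=0)  # all basis vectors get the same length (1)

# A signal that is essentially basis vector 5 plus a little noise.
Y = 2.0 * D[:, 5] + 0.01 * rng.standard_normal(64)

# The larger |d_i^T Y|, the more similar atom d_i is to Y.
similarities = np.abs(D.T @ Y)
best = int(np.argmax(similarities))
print(best)  # index of the most similar basis vector
```

OMP repeats this selection on the residual, then refits the coefficients of all chosen atoms by least squares.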

## 3. K-SVD

K-SVD is an algorithm that optimizes both the dictionary D and the sparse representation X, alternating between a sparse coding step (e.g. OMP) and an SVD-based dictionary update.

## 4. Sparse Modeling

```python
import numpy as np
from sklearn.decomposition import sparse_encode  # scikit-learn

for i in range(max_iteration):
    # Sparse coding step: fix D, optimize X with OMP.
    # (scikit-learn's row convention: Y ≈ X @ D)
    X = sparse_encode(Y, D, algorithm="omp")
    # Dictionary update step: refit each basis vector in turn.
    for j in range(num_basis):
        nonzero_index = X[:, j] != 0  # patches that use atom j
        if not nonzero_index.any():
            continue  # atom unused in this round
        X[nonzero_index, j] = 0
        error = Y[nonzero_index, :] - np.dot(X[nonzero_index, :], D)
        U, S, Vh = np.linalg.svd(error, full_matrices=False)
        X[nonzero_index, j] = U[:, 0] * S[0]  # updated coefficients
        D[j, :] = Vh[0, :]                    # first right singular vector
```
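A minimal end-to-end sketch of this loop on toy data (all sizes, the seed, the iteration count, and the single-atom sparsity level are made up for illustration; `sparse_encode` is from scikit-learn):

```python
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(0)
num_patches, num_features, num_basis = 200, 32, 8

# Toy "normal" patches: each one is a single atom of a hidden dictionary.
true_D = rng.standard_normal((num_basis, num_features))
true_D /= np.linalg.norm(true_D, axis=1, keepdims=True)
Y = true_D[rng.integers(num_basis, size=num_patches)]

# Random initial dictionary (unit-norm rows, scikit-learn convention Y ≈ X @ D).
D = rng.standard_normal((num_basis, num_features))
D /= np.linalg.norm(D, axis=1, keepdims=True)

X = sparse_encode(Y, D, algorithm="omp", n_nonzero_coefs=1)
init_error = np.linalg.norm(Y - X @ D)

for _ in range(10):  # K-SVD iterations
    X = sparse_encode(Y, D, algorithm="omp", n_nonzero_coefs=1)
    for j in range(num_basis):
        nz = X[:, j] != 0
        if not nz.any():
            continue  # atom unused in this round
        X[nz, j] = 0.0
        error = Y[nz, :] - X[nz, :] @ D
        U, S, Vh = np.linalg.svd(error, full_matrices=False)
        X[nz, j] = U[:, 0] * S[0]
        D[j, :] = Vh[0, :]

final_error = np.linalg.norm(Y - X @ D)
print(final_error < init_error)  # the learned dictionary fits far better
```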


## 5. Model Comparison on MVTec AD Dataset

### 5-2. AUROC Scores

| Category | Model 1 | Model 2 | Model 3 |
| --- | --- | --- | --- |
| zipper | 0.923 | 0.975 | 0.743 |
| wood | 0.992 | 0.965 | 0.934 |
| transistor | 0.998 | 0.918 | 0.462 |
| toothbrush | 0.883 | 0.972 | 0.758 |
| tile | 0.994 | 0.997 | 0.862 |
| screw | 0.815 | 0.799 | 0.744 |
| pill | 0.958 | 0.786 | 0.467 |
| metal_nut | 0.992 | 0.920 | 0.402 |
| leather | 1.000 | 1.000 | 0.766 |
| hazelnut | 0.985 | 0.890 | 0.967 |
| grid | 0.959 | 0.983 | 0.772 |
| carpet | 0.997 | 0.781 | 0.375 |
| capsule | 0.937 | 0.731 | 0.577 |
| cable | 0.930 | 0.655 | 0.408 |
| bottle | 1.000 | 0.971 | 0.625 |

### 5-3. PRO30 Scores

| Category | Model 1 | Model 2 | Model 3 |
| --- | --- | --- | --- |
| zipper | 0.935 | - | 0.727 |
| wood | 0.891 | - | 0.822 |
| transistor | 0.949 | - | 0.659 |
| toothbrush | 0.915 | - | 0.902 |
| tile | 0.826 | - | 0.645 |
| screw | 0.936 | - | 0.951 |
| pill | 0.952 | - | 0.873 |
| metal_nut | 0.933 | - | 0.700 |
| leather | 0.978 | - | 0.768 |
| hazelnut | 0.937 | - | 0.976 |
| grid | 0.866 | - | 0.642 |
| carpet | 0.952 | - | 0.515 |
| capsule | 0.921 | - | 0.796 |
| cable | 0.918 | - | 0.658 |
| bottle | 0.951 | - | 0.705 |