
Prometheus & Grafana on k3s on Raspberry Pi OS

Published 2022/01/14

Install k3s on Raspberry Pi OS

This article sets up k3s on a Raspberry Pi 4 (pi4). If k3s is not installed yet, it can be installed with the standard installer script: curl -sfL https://get.k3s.io | sh -

node

pi@pi4:~/git/pi4 $ uname -a
Linux pi4 5.10.63-v7l+ #1459 SMP Wed Oct 6 16:41:57 BST 2021 armv7l GNU/Linux

edit /boot/cmdline.txt

 sudo vi /boot/cmdline.txt

Append "cgroup_memory=1 cgroup_enable=memory" to the end of the single kernel command line:

console=tty1 console=serial0,115200 root=PARTUUID=7605b8cd-02 rootfstype=ext4 fsck.repair=yes rootwait quiet splash plymouth.ignore-serial-consoles cgroup_memory=1 cgroup_enable=memory
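The flags can also be appended non-interactively with sed. The sketch below works on a temporary copy with a shortened example line; on the Pi, target /boot/cmdline.txt with sudo instead.

```shell
# Stand-in for /boot/cmdline.txt (shortened example content).
printf 'console=tty1 root=PARTUUID=7605b8cd-02 rootfstype=ext4 fsck.repair=yes rootwait\n' > /tmp/cmdline.txt
# cmdline.txt must stay a single line, so append to the end of that line.
sed -i 's/$/ cgroup_memory=1 cgroup_enable=memory/' /tmp/cmdline.txt
cat /tmp/cmdline.txt
```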

reboot

reboot
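After the reboot, you can confirm that the memory cgroup controller is now visible to the kernel (k3s will not start on Raspberry Pi OS without it):

```shell
# The memory controller should be listed after rebooting with
# cgroup_memory=1 cgroup_enable=memory on the kernel command line.
grep memory /proc/cgroups
```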

check

Check that the node was created and is in Ready status.

sudo kubectl get node

output

NAME   STATUS   ROLES                  AGE   VERSION
pi4    Ready    control-plane,master   15m   v1.22.5+k3s1

Fixed IP of pi4

Add the following to /etc/dhcpcd.conf and reboot.

# Wired LAN
interface eth0
static ip_address=192.168.11.33/24

# Wifi
interface wlan0
static ip_address=192.168.11.33/24
static routers=192.168.11.1
static domain_name_servers=192.168.11.1

Install NFS server

sudo apt update
sudo apt install nfs-kernel-server
sudo mkdir /rjj-storage/k3s/prometheus -p
sudo chown nobody:nogroup /rjj-storage/k3s/prometheus
sudo sed -i -e '$a /rjj-storage/k3s/prometheus 192.168.11.0/24(rw,no_root_squash)' /etc/exports
sudo systemctl restart nfs-kernel-server

Install Prometheus

See the following repository for reference.

https://github.com/ji-ryoo/prometheus_k3s

Edit yaml

Edit these YAML manifests for your environment, then copy and paste them.
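If you script the customization rather than hand-editing, the NFS server address and export path can be swapped in with sed. A sketch on a stand-in file; 192.168.11.40 and /srv/k3s/prometheus are hypothetical example values, and on a real run you would save the manifest below to a file and run sed against that.

```shell
# Stand-in for a saved copy of the manifest (only the lines sed touches).
printf 'server: 192.168.11.33\npath: /rjj-storage/k3s/prometheus\n' > /tmp/prometheus.yaml
# Point the PV at your own NFS server and export (example values).
sed -i -e 's/192\.168\.11\.33/192.168.11.40/' \
       -e 's|/rjj-storage/k3s/prometheus|/srv/k3s/prometheus|' /tmp/prometheus.yaml
cat /tmp/prometheus.yaml
```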
kubectl create ns prometheus
cat << "EOF" | kubectl apply -f -
---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: slow
  name: prometheus-pv
  namespace: prometheus
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Gi
  nfs:
    path: /rjj-storage/k3s/prometheus
    server: 192.168.11.33
  persistentVolumeReclaimPolicy: Recycle
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: slow
  labels:
    app: prometheus
  name: prometheus-pvc
  namespace: prometheus
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
  volumeName: prometheus-pv
---
apiVersion: v1
data:
  prometheus.yml: |
    # my global config
    global:
      scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
      evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
      # scrape_timeout is set to the global default (10s).

    # Alertmanager configuration
    alerting:
      alertmanagers:
      - static_configs:
        - targets:
          # - alertmanager:9093

    # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
    rule_files:
      # - "first_rules.yml"
      # - "second_rules.yml"

    # A scrape configuration containing exactly one endpoint to scrape:
    # Here it's Prometheus itself.
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
      - job_name: 'prometheus'

        # metrics_path defaults to '/metrics'
        # scheme defaults to 'http'.

        static_configs:
        - targets: ['localhost:9090']
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: prometheus-config
  namespace: prometheus
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: prometheus-node-exporter
  namespace: prometheus
spec:
  selector:
    matchLabels:
      app: prometheus-node-exporter
  template:
    metadata:
      labels:
        app: prometheus-node-exporter
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9100'
    spec:
      containers:
        - name: prometheus-node-exporter
          image: rycus86/prometheus-node-exporter
          ports:
          - containerPort: 9100
      hostNetwork: true
      hostPID: true
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: prometheus
  name: prometheus
  namespace: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: prometheus
    spec:
      containers:
      - image: rycus86/prometheus
        name: prometheus
        ports:
        - containerPort: 9090
        resources: {}
        volumeMounts:
        - name: prometheus-config
          mountPath: /etc/prometheus
        - mountPath: /prometheus
          name: prometheus-pvc
      volumes:
        - name: prometheus-pvc
          persistentVolumeClaim:
            claimName: prometheus-pvc
        - name: prometheus-config
          configMap:
            name: prometheus-config
            items:
            - key: prometheus.yml
              path: prometheus.yml
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus-svc
  namespace: prometheus
spec:
  ports:
  - port: 9090
    targetPort: 9090
  selector:
    app: prometheus
  type: NodePort
EOF

get Prometheus URL

The NodePort service defined above exposes the Prometheus UI. Access ${PROMETHEUS_URL} in your browser.

export PROMETHEUS_NODE_PORT=$(kubectl get svc -n prometheus -o jsonpath='{.items[*].spec.ports[0].nodePort}')
export PROMETHEUS_IP=$(hostname -I | cut -f1 -d' ')
export PROMETHEUS_URL="http://${PROMETHEUS_IP}:${PROMETHEUS_NODE_PORT}"
echo ${PROMETHEUS_URL}

TODO

Grafana

https://github.com/ji-ryoo/grafana_k3s

REF

delete

Still checking whether this can be used.

Install MetalLB on k3s

Disable servicelb (the load balancer built into k3s) so it does not conflict with MetalLB.
https://metallb.universe.tf/configuration/k3s/
curl -sfL https://get.k3s.io | sh -s - --disable servicelb

Install MetalLB.

sudo kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/namespace.yaml
sudo kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml

Layer 2 Configuration

https://metallb.universe.tf/configuration/#layer-2-configuration
cat << "EOF" | sudo kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.11.33-192.168.11.33
EOF
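The pool above pins MetalLB to the node's own address. A range of spare addresses can be given instead; the values below are hypothetical and should be kept outside your DHCP server's range:

```yaml
      addresses:
      - 192.168.11.200-192.168.11.210
```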
