Learning Helm with minikube
Hello. How are you doing?
This is Ryo Yoshii, a big fan of "No human labor is no human error".
Last time, I built a Kubernetes cluster with minikube.
This time I go one step further and study Helm, the package manager for Kubernetes. A package manager is probably essential.
Installing Helm
On macOS, install it with Homebrew:
$ brew install helm
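Other platforms have their own install methods (see the Helm docs). A quick check that the client is installed:
$ helm version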
Let's try deploying WordPress.
# Add the Helm repository
$ helm repo add bitnami https://charts.bitnami.com/bitnami

# Install the Helm chart
$ helm install happy-panda bitnami/wordpress

# To customize, prepare a file with your settings and pass it in
$ helm install happy-panda bitnami/wordpress -f values.yaml

# To specify a Namespace
$ helm install happy-panda bitnami/wordpress -f values.yaml -n my-wordpress
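The usual release-management commands apply afterwards. A minimal sketch, using the release name and namespace from above:
# List releases and check status
$ helm list -n my-wordpress
$ helm status happy-panda -n my-wordpress

# Upgrade with changed values, or remove the release
$ helm upgrade happy-panda bitnami/wordpress -f values.yaml -n my-wordpress
$ helm uninstall happy-panda -n my-wordpress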
WordPress was deployed without writing any Service or Deployment manifests myself. Very convenient.
But wait. This doesn't feel declarative. If we're using Kubernetes, I want to stay declarative.
Helmfile
That was a rather contrived segue, sorry.
Of course there is a way to use Helm declaratively: Helmfile.
Let's install it quickly.
$ brew install helmfile
Create a helmfile.yaml, then run helmfile sync or helmfile apply in that directory.
See the Helmfile documentation for the file format.
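For example, the earlier WordPress install could be declared roughly like this. This is just a sketch derived from the commands above; the release name and namespace are the ones used there:
repositories:
  - name: bitnami
    url: https://charts.bitnami.com/bitnami

releases:
  - name: happy-panda
    namespace: my-wordpress
    chart: bitnami/wordpress
    values:
      - values.yaml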
Building a demo environment with Helmfile
While learning the Helmfile syntax, let's build a demo environment.
We'll build the same architecture as the OpenTelemetry Demo.
There is an official procedure that builds this in one shot, but for the sake of learning I'll write the Helmfile from scratch.
We'll build the following four components:
- Demo shopping site
- Otel-collector
- Prometheus
- Grafana
The working directory looks like this. Each helmfile.yaml is paired with its values.yaml.
.
├── demo
│ └── opentelemetry-demo.yaml
├── grafana
│ ├── helmfile.yaml
│ └── values.yaml
├── otel-collector
│ ├── helmfile.yaml
│ └── values.yaml
└── prometheus
├── helmfile.yaml
└── values.yaml
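As an aside, a top-level helmfile.yaml (not part of the layout above) could also tie the per-component files together via the helmfiles: directive, so a single helmfile sync deploys everything. A sketch:
helmfiles:
  - grafana/helmfile.yaml
  - prometheus/helmfile.yaml
  - otel-collector/helmfile.yaml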
Grafana
Let's start with Grafana.
A helmfile.yaml basically comes down to declaring repositories and releases: the values we used to pass as arguments to the helm command are now declared in a file.
This is exactly what I wanted.
repositories:
  - name: grafana
    url: https://grafana.github.io/helm-charts

releases:
  - name: grafana
    namespace: my-grafana
    chart: grafana/grafana
    version: 7.2.3
    values:
      - values.yaml
Writing a values.yaml from scratch is a lot of work, so dump the chart's default values to a file with the helm command.
$ helm show values grafana/grafana > values.yaml
Edit values.yaml for your environment. It is long, so only the changed parts are excerpted below.
The storage class is standard, minikube's default. A Prometheus datasource pointing at the prometheus-server Service (deployed in the next section) is also declared.
Set adminPassword to something harder to guess than the value shown here.
persistence:
  enabled: true
  storageClassName: standard

adminPassword: admin

datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-server.my-prometheus.svc.cluster.local/
        isDefault: true
Once the helmfile.yaml and values.yaml are ready, deploy with helmfile sync.
$ helmfile sync
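helmfile apply works too: it shows a diff first and only syncs when something changed. It relies on the helm-diff plugin, so install that once if you want to use it:
$ helm plugin install https://github.com/databus23/helm-diff
$ helmfile diff    # preview the changes
$ helmfile apply   # sync only when the diff is non-empty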
Confirm that everything was deployed (k is aliased to kubectl).
$ k get pod,deploy,svc -n my-grafana
NAME READY STATUS RESTARTS AGE
pod/grafana-c4ffdb5c-bd6zq 1/1 Running 0 45s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/grafana 1/1 1 1 45s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/grafana ClusterIP 10.108.66.12 <none> 80/TCP 45s
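We'll open the UI later with minikube service; a kubectl port-forward works just as well if you prefer (the Service listens on port 80, as shown above):
$ kubectl port-forward svc/grafana 3000:80 -n my-grafana
# then open http://localhost:3000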
Prometheus
Next up is Prometheus.
The procedure is the same as for Grafana, so let's move quickly.
repositories:
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts

releases:
  - name: prometheus
    namespace: my-prometheus
    chart: prometheus-community/prometheus
    version: 25.9.0
    values:
      - values.yaml
Create the values.yaml.
$ helm show values prometheus-community/prometheus > values.yaml
Edit it for your environment.
server:
  persistentVolume:
    storageClass: "standard"

serverFiles:
  prometheus.yml:
    scrape_configs:
      - job_name: 'otel-collector'
        scrape_interval: 10s
        static_configs:
          - targets:
              - otel-collector-opentelemetry-collector.my-otel.svc.cluster.local:8888
Deploy and verify.
$ helmfile sync
$ k get pod,deploy,svc -n my-prometheus
NAME READY STATUS RESTARTS AGE
pod/prometheus-alertmanager-0 1/1 Running 0 18h
pod/prometheus-kube-state-metrics-745b475957-9sgf8 1/1 Running 0 18h
pod/prometheus-prometheus-node-exporter-mdk9m 1/1 Running 0 18h
pod/prometheus-prometheus-pushgateway-6574ff77bb-wzt6b 1/1 Running 0 18h
pod/prometheus-server-85b7d5fd59-cdqt9 2/2 Running 0 18h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/prometheus-kube-state-metrics 1/1 1 1 18h
deployment.apps/prometheus-prometheus-pushgateway 1/1 1 1 18h
deployment.apps/prometheus-server 1/1 1 1 18h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/prometheus-alertmanager ClusterIP 10.97.150.88 <none> 9093/TCP 18h
service/prometheus-alertmanager-headless ClusterIP None <none> 9093/TCP 18h
service/prometheus-kube-state-metrics ClusterIP 10.97.102.166 <none> 8080/TCP 18h
service/prometheus-prometheus-node-exporter ClusterIP 10.105.180.218 <none> 9100/TCP 18h
service/prometheus-prometheus-pushgateway ClusterIP 10.100.221.233 <none> 9091/TCP 18h
service/prometheus-server ClusterIP 10.109.115.227 <none> 80/TCP 18h
A minor aside: prometheus-server seems to be exposed on port 80 rather than 9090.
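So a quick way to check the scrape targets is to forward a local port to the Service's port 80 and open the targets page:
$ kubectl port-forward svc/prometheus-server 9090:80 -n my-prometheus
# then open http://localhost:9090/targets and confirm the otel-collector job is up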
Otel-collector
The Otel-collector follows the same pattern.
repositories:
  - name: open-telemetry
    url: https://open-telemetry.github.io/opentelemetry-helm-charts

releases:
  - name: otel-collector
    namespace: my-otel
    chart: open-telemetry/opentelemetry-collector
    version: 25.9.0
    values:
      - values.yaml
Create the values.yaml.
$ helm show values open-telemetry/opentelemetry-collector > values.yaml
Edit it for your environment.
mode: "daemonset"

presets:
  hostMetrics:
    enabled: true
  kubernetesAttributes:
    enabled: true
    extractAllPodLabels: true
    extractAllPodAnnotations: true

config:
  exporters:
    prometheus:
      endpoint: ${env:MY_POD_IP}:8889
  processors:
    filter/ottl:
      error_mode: ignore
      metrics:
        metric:
          - 'name == "rpc.server.duration"'
    transform:
      metric_statements:
        - context: metric
          statements:
            - set(description, "") where name == "queueSize"
            - set(description, "") where name == "http.client.duration"
    memory_limiter:
      check_interval: 5s
      limit_percentage: 80
      spike_limit_percentage: 25
    k8sattributes:
      extract:
        metadata:
          - k8s.namespace.name
          - k8s.deployment.name
          - k8s.statefulset.name
          - k8s.daemonset.name
          - k8s.cronjob.name
          - k8s.job.name
          - k8s.node.name
          - k8s.pod.name
          - k8s.pod.uid
          - k8s.pod.start_time
      passthrough: false
      pod_association:
        - sources:
            - from: resource_attribute
              name: k8s.pod.ip
        - sources:
            - from: resource_attribute
              name: k8s.pod.uid
        - sources:
            - from: connection
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: ${env:MY_POD_IP}:4317
        http:
          endpoint: ${env:MY_POD_IP}:4318
    prometheus:
      config:
        scrape_configs:
          - job_name: opentelemetry-collector
            scrape_interval: 10s
            static_configs:
              - targets:
                  - ${env:MY_POD_IP}:8888
  service:
    pipelines:
      metrics:
        exporters:
          - debug
          - prometheus
        processors:
          - memory_limiter
          - filter/ottl
          - transform
          - batch
        receivers:
          - otlp
          - prometheus
      traces:
        exporters:
          - debug
        processors:
          - memory_limiter
          - batch
        receivers:
          - otlp

clusterRole:
  create: true

ports:
  metrics:
    enabled: true
    containerPort: 8888
    servicePort: 8888
    protocol: TCP
  prometheus:
    enabled: true
    containerPort: 8889
    servicePort: 8889
    protocol: TCP

podMonitor:
  enabled: true

serviceMonitor:
  enabled: true
Deploy and verify.
$ helmfile sync
$ k get pod,deploy,svc -n my-otel
NAME READY STATUS RESTARTS AGE
pod/otel-collector-opentelemetry-collector-5569bb56d-42z4s 1/1 Running 2 (17h ago) 18h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/otel-collector-opentelemetry-collector 1/1 1 1 20h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/otel-collector-opentelemetry-collector ClusterIP 10.106.237.55 <none> 6831/UDP,14250/TCP,14268/TCP,8888/TCP,4317/TCP,4318/TCP,9411/TCP 20h
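The collector exposes its own metrics on port 8888 (the target we pointed Prometheus at), so a quick sanity check is possible with a port-forward, assuming the Service name shown above:
$ kubectl port-forward svc/otel-collector-opentelemetry-collector 8888:8888 -n my-otel
# in another terminal:
$ curl -s http://localhost:8888/metrics | head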
Demo shopping site
For the demo site, I use opentelemetry-demo as-is.
I extracted only the components the site needs from kubernetes/opentelemetry-demo.yaml and created them with kubectl create.
Click to expand
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
# This file is generated by 'make generate-kubernetes-manifests'
# https://github.com/open-telemetry/opentelemetry-demo/blob/main/kubernetes/opentelemetry-demo.yaml
---
# Source: opentelemetry-demo/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: opentelemetry-demo
labels:
opentelemetry.io/name: opentelemetry-demo
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/name: opentelemetry-demo
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-adservice
labels:
opentelemetry.io/name: opentelemetry-demo-adservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: adservice
app.kubernetes.io/name: opentelemetry-demo-adservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 8080
name: tcp-service
targetPort: 8080
selector:
opentelemetry.io/name: opentelemetry-demo-adservice
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-cartservice
labels:
opentelemetry.io/name: opentelemetry-demo-cartservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: cartservice
app.kubernetes.io/name: opentelemetry-demo-cartservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 8080
name: tcp-service
targetPort: 8080
selector:
opentelemetry.io/name: opentelemetry-demo-cartservice
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-checkoutservice
labels:
opentelemetry.io/name: opentelemetry-demo-checkoutservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: checkoutservice
app.kubernetes.io/name: opentelemetry-demo-checkoutservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 8080
name: tcp-service
targetPort: 8080
selector:
opentelemetry.io/name: opentelemetry-demo-checkoutservice
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-currencyservice
labels:
opentelemetry.io/name: opentelemetry-demo-currencyservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: currencyservice
app.kubernetes.io/name: opentelemetry-demo-currencyservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 8080
name: tcp-service
targetPort: 8080
selector:
opentelemetry.io/name: opentelemetry-demo-currencyservice
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-emailservice
labels:
opentelemetry.io/name: opentelemetry-demo-emailservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: emailservice
app.kubernetes.io/name: opentelemetry-demo-emailservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 8080
name: tcp-service
targetPort: 8080
selector:
opentelemetry.io/name: opentelemetry-demo-emailservice
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-featureflagservice
labels:
opentelemetry.io/name: opentelemetry-demo-featureflagservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: featureflagservice
app.kubernetes.io/name: opentelemetry-demo-featureflagservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 50053
name: grpc
targetPort: 50053
- port: 8081
name: http
targetPort: 8081
selector:
opentelemetry.io/name: opentelemetry-demo-featureflagservice
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-ffspostgres
labels:
opentelemetry.io/name: opentelemetry-demo-ffspostgres
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: ffspostgres
app.kubernetes.io/name: opentelemetry-demo-ffspostgres
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 5432
name: postgres
targetPort: 5432
selector:
opentelemetry.io/name: opentelemetry-demo-ffspostgres
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-frontend
labels:
opentelemetry.io/name: opentelemetry-demo-frontend
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: frontend
app.kubernetes.io/name: opentelemetry-demo-frontend
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 8080
name: tcp-service
targetPort: 8080
selector:
opentelemetry.io/name: opentelemetry-demo-frontend
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-frontendproxy
labels:
opentelemetry.io/name: opentelemetry-demo-frontendproxy
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: frontendproxy
app.kubernetes.io/name: opentelemetry-demo-frontendproxy
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 8080
name: tcp-service
targetPort: 8080
selector:
opentelemetry.io/name: opentelemetry-demo-frontendproxy
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-kafka
labels:
opentelemetry.io/name: opentelemetry-demo-kafka
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: kafka
app.kubernetes.io/name: opentelemetry-demo-kafka
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 9092
name: plaintext
targetPort: 9092
- port: 9093
name: controller
targetPort: 9093
selector:
opentelemetry.io/name: opentelemetry-demo-kafka
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-loadgenerator
labels:
opentelemetry.io/name: opentelemetry-demo-loadgenerator
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: loadgenerator
app.kubernetes.io/name: opentelemetry-demo-loadgenerator
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 8089
name: tcp-service
targetPort: 8089
selector:
opentelemetry.io/name: opentelemetry-demo-loadgenerator
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-paymentservice
labels:
opentelemetry.io/name: opentelemetry-demo-paymentservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: paymentservice
app.kubernetes.io/name: opentelemetry-demo-paymentservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 8080
name: tcp-service
targetPort: 8080
selector:
opentelemetry.io/name: opentelemetry-demo-paymentservice
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-productcatalogservice
labels:
opentelemetry.io/name: opentelemetry-demo-productcatalogservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: productcatalogservice
app.kubernetes.io/name: opentelemetry-demo-productcatalogservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 8080
name: tcp-service
targetPort: 8080
selector:
opentelemetry.io/name: opentelemetry-demo-productcatalogservice
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-quoteservice
labels:
opentelemetry.io/name: opentelemetry-demo-quoteservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: quoteservice
app.kubernetes.io/name: opentelemetry-demo-quoteservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 8080
name: tcp-service
targetPort: 8080
selector:
opentelemetry.io/name: opentelemetry-demo-quoteservice
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-recommendationservice
labels:
opentelemetry.io/name: opentelemetry-demo-recommendationservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: recommendationservice
app.kubernetes.io/name: opentelemetry-demo-recommendationservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 8080
name: tcp-service
targetPort: 8080
selector:
opentelemetry.io/name: opentelemetry-demo-recommendationservice
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-redis
labels:
opentelemetry.io/name: opentelemetry-demo-redis
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: redis
app.kubernetes.io/name: opentelemetry-demo-redis
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 6379
name: redis
targetPort: 6379
selector:
opentelemetry.io/name: opentelemetry-demo-redis
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: v1
kind: Service
metadata:
name: opentelemetry-demo-shippingservice
labels:
opentelemetry.io/name: opentelemetry-demo-shippingservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: shippingservice
app.kubernetes.io/name: opentelemetry-demo-shippingservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
type: ClusterIP
ports:
- port: 8080
name: tcp-service
targetPort: 8080
selector:
opentelemetry.io/name: opentelemetry-demo-shippingservice
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-accountingservice
labels:
opentelemetry.io/name: opentelemetry-demo-accountingservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: accountingservice
app.kubernetes.io/name: opentelemetry-demo-accountingservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-accountingservice
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-accountingservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: accountingservice
app.kubernetes.io/name: opentelemetry-demo-accountingservice
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: accountingservice
image: 'ghcr.io/open-telemetry/demo:1.7.0-accountingservice'
imagePullPolicy: IfNotPresent
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: KAFKA_SERVICE_ADDR
value: 'opentelemetry-demo-kafka:9092'
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4317
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 20Mi
initContainers:
- command:
- sh
- -c
- until nc -z -v -w30 opentelemetry-demo-kafka 9092; do echo waiting
for kafka; sleep 2; done;
image: busybox:latest
name: wait-for-kafka
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-adservice
labels:
opentelemetry.io/name: opentelemetry-demo-adservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: adservice
app.kubernetes.io/name: opentelemetry-demo-adservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-adservice
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-adservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: adservice
app.kubernetes.io/name: opentelemetry-demo-adservice
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: adservice
image: 'ghcr.io/open-telemetry/demo:1.7.0-adservice'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
name: service
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: AD_SERVICE_PORT
value: "8080"
- name: FEATURE_FLAG_GRPC_SERVICE_ADDR
value: 'opentelemetry-demo-featureflagservice:50053'
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4317
- name: OTEL_LOGS_EXPORTER
value: otlp
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 300Mi
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-cartservice
labels:
opentelemetry.io/name: opentelemetry-demo-cartservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: cartservice
app.kubernetes.io/name: opentelemetry-demo-cartservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-cartservice
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-cartservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: cartservice
app.kubernetes.io/name: opentelemetry-demo-cartservice
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: cartservice
image: 'ghcr.io/open-telemetry/demo:1.7.0-cartservice'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
name: service
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: CART_SERVICE_PORT
value: "8080"
- name: ASPNETCORE_URLS
value: http://*:$(CART_SERVICE_PORT)
- name: FEATURE_FLAG_GRPC_SERVICE_ADDR
value: 'opentelemetry-demo-featureflagservice:50053'
- name: REDIS_ADDR
value: 'opentelemetry-demo-redis:6379'
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4317
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 160Mi
initContainers:
- command:
- sh
- -c
- until nc -z -v -w30 opentelemetry-demo-redis 6379; do echo waiting
for redis; sleep 2; done;
image: busybox:latest
name: wait-for-redis
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-checkoutservice
labels:
opentelemetry.io/name: opentelemetry-demo-checkoutservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: checkoutservice
app.kubernetes.io/name: opentelemetry-demo-checkoutservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-checkoutservice
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-checkoutservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: checkoutservice
app.kubernetes.io/name: opentelemetry-demo-checkoutservice
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: checkoutservice
image: 'ghcr.io/open-telemetry/demo:1.7.0-checkoutservice'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
name: service
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: CHECKOUT_SERVICE_PORT
value: "8080"
- name: CART_SERVICE_ADDR
value: 'opentelemetry-demo-cartservice:8080'
- name: CURRENCY_SERVICE_ADDR
value: 'opentelemetry-demo-currencyservice:8080'
- name: EMAIL_SERVICE_ADDR
value: http://opentelemetry-demo-emailservice:8080
- name: PAYMENT_SERVICE_ADDR
value: 'opentelemetry-demo-paymentservice:8080'
- name: PRODUCT_CATALOG_SERVICE_ADDR
value: 'opentelemetry-demo-productcatalogservice:8080'
- name: SHIPPING_SERVICE_ADDR
value: 'opentelemetry-demo-shippingservice:8080'
- name: KAFKA_SERVICE_ADDR
value: 'opentelemetry-demo-kafka:9092'
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4317
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 20Mi
initContainers:
- command:
- sh
- -c
- until nc -z -v -w30 opentelemetry-demo-kafka 9092; do echo waiting
for kafka; sleep 2; done;
image: busybox:latest
name: wait-for-kafka
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-currencyservice
labels:
opentelemetry.io/name: opentelemetry-demo-currencyservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: currencyservice
app.kubernetes.io/name: opentelemetry-demo-currencyservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-currencyservice
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-currencyservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: currencyservice
app.kubernetes.io/name: opentelemetry-demo-currencyservice
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: currencyservice
image: 'ghcr.io/open-telemetry/demo:1.7.0-currencyservice'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
name: service
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: CURRENCY_SERVICE_PORT
value: "8080"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4317
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 20Mi
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-emailservice
labels:
opentelemetry.io/name: opentelemetry-demo-emailservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: emailservice
app.kubernetes.io/name: opentelemetry-demo-emailservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-emailservice
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-emailservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: emailservice
app.kubernetes.io/name: opentelemetry-demo-emailservice
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: emailservice
image: 'ghcr.io/open-telemetry/demo:1.7.0-emailservice'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
name: service
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: EMAIL_SERVICE_PORT
value: "8080"
- name: APP_ENV
value: production
- name: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4318/v1/traces
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 100Mi
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-featureflagservice
labels:
opentelemetry.io/name: opentelemetry-demo-featureflagservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: featureflagservice
app.kubernetes.io/name: opentelemetry-demo-featureflagservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-featureflagservice
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-featureflagservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: featureflagservice
app.kubernetes.io/name: opentelemetry-demo-featureflagservice
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: featureflagservice
image: 'ghcr.io/open-telemetry/demo:1.7.0-featureflagservice'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 50053
name: grpc
- containerPort: 8081
name: http
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: FEATURE_FLAG_SERVICE_PORT
value: "8081"
- name: FEATURE_FLAG_GRPC_SERVICE_PORT
value: "50053"
- name: DATABASE_URL
value: ecto://ffs:ffs@opentelemetry-demo-ffspostgres:5432/ffs
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4317
- name: OTEL_EXPORTER_OTLP_TRACES_PROTOCOL
value: grpc
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 175Mi
livenessProbe:
httpGet:
path: /featureflags/
port: 8081
initialDelaySeconds: 30
periodSeconds: 10
initContainers:
- command:
- sh
- -c
- until nc -z -v -w30 opentelemetry-demo-ffspostgres 5432; do echo
waiting for ffspostgres; sleep 2; done
image: busybox:latest
name: wait-for-ffspostgres
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-ffspostgres
labels:
opentelemetry.io/name: opentelemetry-demo-ffspostgres
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: ffspostgres
app.kubernetes.io/name: opentelemetry-demo-ffspostgres
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-ffspostgres
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-ffspostgres
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: ffspostgres
app.kubernetes.io/name: opentelemetry-demo-ffspostgres
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: ffspostgres
image: 'postgres:16.1'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5432
name: postgres
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: POSTGRES_DB
value: ffs
- name: POSTGRES_USER
value: ffs
- name: POSTGRES_PASSWORD
value: ffs
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 120Mi
securityContext:
runAsGroup: 999
runAsNonRoot: true
runAsUser: 999
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-frauddetectionservice
labels:
opentelemetry.io/name: opentelemetry-demo-frauddetectionservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: frauddetectionservice
app.kubernetes.io/name: opentelemetry-demo-frauddetectionservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-frauddetectionservice
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-frauddetectionservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: frauddetectionservice
app.kubernetes.io/name: opentelemetry-demo-frauddetectionservice
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: frauddetectionservice
image: 'ghcr.io/open-telemetry/demo:1.7.0-frauddetectionservice'
imagePullPolicy: IfNotPresent
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: KAFKA_SERVICE_ADDR
value: 'opentelemetry-demo-kafka:9092'
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4317
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 200Mi
initContainers:
- command:
- sh
- -c
- until nc -z -v -w30 opentelemetry-demo-kafka 9092; do echo waiting
for kafka; sleep 2; done;
image: busybox:latest
name: wait-for-kafka
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-frontend
labels:
opentelemetry.io/name: opentelemetry-demo-frontend
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: frontend
app.kubernetes.io/name: opentelemetry-demo-frontend
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-frontend
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-frontend
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: frontend
app.kubernetes.io/name: opentelemetry-demo-frontend
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: frontend
image: 'ghcr.io/open-telemetry/demo:1.7.0-frontend'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
name: service
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: FRONTEND_PORT
value: "8080"
- name: FRONTEND_ADDR
value: :8080
- name: AD_SERVICE_ADDR
value: 'opentelemetry-demo-adservice:8080'
- name: CART_SERVICE_ADDR
value: 'opentelemetry-demo-cartservice:8080'
- name: CHECKOUT_SERVICE_ADDR
value: 'opentelemetry-demo-checkoutservice:8080'
- name: CURRENCY_SERVICE_ADDR
value: 'opentelemetry-demo-currencyservice:8080'
- name: PRODUCT_CATALOG_SERVICE_ADDR
value: 'opentelemetry-demo-productcatalogservice:8080'
- name: RECOMMENDATION_SERVICE_ADDR
value: 'opentelemetry-demo-recommendationservice:8080'
- name: SHIPPING_SERVICE_ADDR
value: 'opentelemetry-demo-shippingservice:8080'
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4317
- name: WEB_OTEL_SERVICE_NAME
value: frontend-web
- name: PUBLIC_OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
value: http://localhost:8080/otlp-http/v1/traces
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 200Mi
securityContext:
runAsGroup: 1001
runAsNonRoot: true
runAsUser: 1001
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-frontendproxy
labels:
opentelemetry.io/name: opentelemetry-demo-frontendproxy
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: frontendproxy
app.kubernetes.io/name: opentelemetry-demo-frontendproxy
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-frontendproxy
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-frontendproxy
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: frontendproxy
app.kubernetes.io/name: opentelemetry-demo-frontendproxy
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: frontendproxy
image: 'ghcr.io/open-telemetry/demo:1.7.0-frontendproxy'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
name: service
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: ENVOY_PORT
value: "8080"
- name: FRONTEND_PORT
value: "8080"
- name: FRONTEND_HOST
value: 'opentelemetry-demo-frontend'
- name: FEATURE_FLAG_SERVICE_PORT
value: "8081"
- name: FEATURE_FLAG_SERVICE_HOST
value: 'opentelemetry-demo-featureflagservice'
- name: LOCUST_WEB_PORT
value: "8089"
- name: LOCUST_WEB_HOST
value: 'opentelemetry-demo-loadgenerator'
- name: GRAFANA_SERVICE_PORT
value: "80"
- name: GRAFANA_SERVICE_HOST
value: 'opentelemetry-demo-grafana'
- name: JAEGER_SERVICE_PORT
value: "16686"
- name: JAEGER_SERVICE_HOST
value: 'opentelemetry-demo-jaeger-query'
- name: OTEL_COLLECTOR_PORT_GRPC
value: "4317"
- name: OTEL_COLLECTOR_PORT_HTTP
value: "4318"
- name: OTEL_COLLECTOR_HOST
value: $(OTEL_COLLECTOR_NAME)
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 50Mi
securityContext:
runAsGroup: 101
runAsNonRoot: true
runAsUser: 101
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-kafka
labels:
opentelemetry.io/name: opentelemetry-demo-kafka
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: kafka
app.kubernetes.io/name: opentelemetry-demo-kafka
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-kafka
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-kafka
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: kafka
app.kubernetes.io/name: opentelemetry-demo-kafka
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: kafka
image: 'ghcr.io/open-telemetry/demo:1.7.0-kafka'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9092
name: plaintext
- containerPort: 9093
name: controller
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: KAFKA_ADVERTISED_LISTENERS
value: PLAINTEXT://opentelemetry-demo-kafka:9092
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4317
- name: KAFKA_HEAP_OPTS
value: -Xmx200M -Xms200M
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 500Mi
securityContext:
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-loadgenerator
labels:
opentelemetry.io/name: opentelemetry-demo-loadgenerator
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: loadgenerator
app.kubernetes.io/name: opentelemetry-demo-loadgenerator
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-loadgenerator
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-loadgenerator
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: loadgenerator
app.kubernetes.io/name: opentelemetry-demo-loadgenerator
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: loadgenerator
image: 'ghcr.io/open-telemetry/demo:1.7.0-loadgenerator'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8089
name: service
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: LOCUST_WEB_PORT
value: "8089"
- name: LOCUST_USERS
value: "10"
- name: LOCUST_SPAWN_RATE
value: "1"
- name: LOCUST_HOST
value: http://opentelemetry-demo-frontendproxy:8080
- name: LOCUST_HEADLESS
value: "false"
- name: LOCUST_AUTOSTART
value: "true"
- name: PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION
value: python
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4317
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 120Mi
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-paymentservice
labels:
opentelemetry.io/name: opentelemetry-demo-paymentservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: paymentservice
app.kubernetes.io/name: opentelemetry-demo-paymentservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-paymentservice
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-paymentservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: paymentservice
app.kubernetes.io/name: opentelemetry-demo-paymentservice
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: paymentservice
image: 'ghcr.io/open-telemetry/demo:1.7.0-paymentservice'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
name: service
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: PAYMENT_SERVICE_PORT
value: "8080"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4317
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 120Mi
securityContext:
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-productcatalogservice
labels:
opentelemetry.io/name: opentelemetry-demo-productcatalogservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: productcatalogservice
app.kubernetes.io/name: opentelemetry-demo-productcatalogservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-productcatalogservice
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-productcatalogservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: productcatalogservice
app.kubernetes.io/name: opentelemetry-demo-productcatalogservice
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: productcatalogservice
image: 'ghcr.io/open-telemetry/demo:1.7.0-productcatalogservice'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
name: service
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: PRODUCT_CATALOG_SERVICE_PORT
value: "8080"
- name: FEATURE_FLAG_GRPC_SERVICE_ADDR
value: 'opentelemetry-demo-featureflagservice:50053'
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4317
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 20Mi
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-quoteservice
labels:
opentelemetry.io/name: opentelemetry-demo-quoteservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: quoteservice
app.kubernetes.io/name: opentelemetry-demo-quoteservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-quoteservice
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-quoteservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: quoteservice
app.kubernetes.io/name: opentelemetry-demo-quoteservice
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: quoteservice
image: 'ghcr.io/open-telemetry/demo:1.7.0-quoteservice'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
name: service
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: QUOTE_SERVICE_PORT
value: "8080"
- name: OTEL_PHP_AUTOLOAD_ENABLED
value: "true"
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4318
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 40Mi
securityContext:
runAsGroup: 33
runAsNonRoot: true
runAsUser: 33
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-recommendationservice
labels:
opentelemetry.io/name: opentelemetry-demo-recommendationservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: recommendationservice
app.kubernetes.io/name: opentelemetry-demo-recommendationservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-recommendationservice
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-recommendationservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: recommendationservice
app.kubernetes.io/name: opentelemetry-demo-recommendationservice
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: recommendationservice
image: 'ghcr.io/open-telemetry/demo:1.7.0-recommendationservice'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
name: service
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: RECOMMENDATION_SERVICE_PORT
value: "8080"
- name: PRODUCT_CATALOG_SERVICE_ADDR
value: 'opentelemetry-demo-productcatalogservice:8080'
- name: FEATURE_FLAG_GRPC_SERVICE_ADDR
value: 'opentelemetry-demo-featureflagservice:50053'
- name: OTEL_PYTHON_LOG_CORRELATION
value: "true"
- name: PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION
value: python
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4317
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 500Mi
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-redis
labels:
opentelemetry.io/name: opentelemetry-demo-redis
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: redis
app.kubernetes.io/name: opentelemetry-demo-redis
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-redis
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-redis
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: redis
app.kubernetes.io/name: opentelemetry-demo-redis
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: redis
image: 'redis:7.2-alpine'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 6379
name: redis
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 20Mi
securityContext:
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 999
---
# Source: opentelemetry-demo/templates/component.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: opentelemetry-demo-shippingservice
labels:
opentelemetry.io/name: opentelemetry-demo-shippingservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: shippingservice
app.kubernetes.io/name: opentelemetry-demo-shippingservice
app.kubernetes.io/version: "1.7.0"
app.kubernetes.io/part-of: opentelemetry-demo
spec:
replicas: 1
selector:
matchLabels:
opentelemetry.io/name: opentelemetry-demo-shippingservice
template:
metadata:
labels:
opentelemetry.io/name: opentelemetry-demo-shippingservice
app.kubernetes.io/instance: opentelemetry-demo
app.kubernetes.io/component: shippingservice
app.kubernetes.io/name: opentelemetry-demo-shippingservice
spec:
serviceAccountName: opentelemetry-demo
containers:
- name: shippingservice
image: 'ghcr.io/open-telemetry/demo:1.7.0-shippingservice'
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8080
name: service
env:
- name: OTEL_SERVICE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.labels['app.kubernetes.io/component']
- name: OTEL_COLLECTOR_NAME
value: 'otel-collector-opentelemetry-collector.my-otel.svc.cluster.local'
- name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
value: cumulative
- name: SHIPPING_SERVICE_PORT
value: "8080"
- name: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT
value: http://$(OTEL_COLLECTOR_NAME):4317/v1/traces
- name: OTEL_RESOURCE_ATTRIBUTES
value: service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo
resources:
limits:
memory: 20Mi
Once the demo site is deployed, take a look at it in a browser.
You can't actually buy anything, but try adding items to the cart and clicking around on products.
$ minikube service opentelemetry-demo-frontend
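If minikube service does not open the ClusterIP Service directly in your environment, a port-forward is an alternative (port 8080 comes from the frontend Service manifest above):
$ kubectl port-forward svc/opentelemetry-demo-frontend 8080:8080
# then open http://localhost:8080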
Checking metrics in Grafana
After the demo has been running for a while, metrics start showing up in Grafana.
Open Grafana in a browser.
$ minikube service grafana -n my-grafana
From the menu in the left pane, open Home → Explore.
Any metric will do; I'll pick kube_pod_start_time.
Next, look at system_cpu_time_seconds_total. This is a metric collected by OpenTelemetry.
Summary
I studied Helm and Helmfile, and became a considerably more declarative person.
Helm charts are convenient, but I felt you can't really use them well without understanding all of their parameters.
References
OpenTelemetry Demo Documentation
opentelemetry-demo
opentelemetry-helm-charts
Prometheus Configuration
Prometheus helm-charts
Helmfile
Deploy Grafana on Kubernetes