
Inspecting the Kubernetes resources deployed by Helm


Motivation

When installing an application with Helm, I want to know what is actually happening under the hood.

Method

Apparently this can be done with helm get manifest <release name>.

helm get manifest xxx-web
---
# Source: nginx/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: xxx-web-nginx
  namespace: "default"
  labels:
    app.kubernetes.io/name: nginx
    helm.sh/chart: nginx-12.0.4
    app.kubernetes.io/instance: xxx-web
    app.kubernetes.io/managed-by: Helm
  annotations:
spec:
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: "Cluster"
  ports:
    - name: http
      port: 80
      targetPort: http
  selector:
    app.kubernetes.io/name: nginx
    app.kubernetes.io/instance: xxx-web
---
# Source: nginx/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxx-web-nginx
  namespace: "default"
  labels:
    app.kubernetes.io/name: nginx
    helm.sh/chart: nginx-12.0.4
    app.kubernetes.io/instance: xxx-web
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  strategy:
    rollingUpdate: {}
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: nginx
      app.kubernetes.io/instance: xxx-web
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nginx
        helm.sh/chart: nginx-12.0.4
        app.kubernetes.io/instance: xxx-web
        app.kubernetes.io/managed-by: Helm
      annotations:
    spec:
      
      automountServiceAccountToken: false
      shareProcessNamespace: false
      serviceAccountName: default
      affinity:
        podAffinity:
          
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: nginx
                    app.kubernetes.io/instance: xxx-web
                namespaces:
                  - "default"
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
          
      hostNetwork: false
      hostIPC: false
      initContainers:
      containers:
        - name: nginx
          image: docker.io/bitnami/nginx:1.22.0-debian-11-r3
          imagePullPolicy: "IfNotPresent"
          env:
            - name: BITNAMI_DEBUG
              value: "false"
          envFrom:
          ports:
            - name: http
              containerPort: 8080
          livenessProbe:
            failureThreshold: 6
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
            tcpSocket:
              port: http
          readinessProbe:
            failureThreshold: 3
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 3
            tcpSocket:
              port: http
          resources:
            limits: {}
            requests: {}
          volumeMounts:
      volumes:
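
A few related helm get subcommands are also handy for poking at a release. A minimal sketch, using the xxx-web release from above (these commands need access to the cluster the release was installed into):

```shell
# Show the user-supplied values for the release (--all includes computed defaults)
helm get values xxx-web

# Show manifest, hooks, values, and notes in one go
helm get all xxx-web

# Diff the manifest Helm stored at install time against the live cluster objects
helm get manifest xxx-web | kubectl diff -f -
```

The last pipeline is a quick way to spot drift between what Helm thinks it deployed and what is actually running.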

2025-04-19 addendum: on the flags of helm template

While checking the options with helm template -h, I noticed a --dry-run flag. As far as I could tell, the template command itself never deploys anything, so I could not figure out why this option exists.

Judging from Stack Overflow, it does seem that helm template is itself effectively a dry run.

helm template is always --dry-run. If you don't specify helm template --validate, then Helm uses a default set of API versions, and in fact renders the chart without contacting a Kubernetes server at all. If the chart includes custom resource definitions (CRDs), helm template without --validate won't complain that they're not being processed. The key important effect of helm template --debug is that, if the template produces invalid YAML, it will get printed out anyways. helm install --dry-run --debug and helm install --validate seem extremely similar, in terms of the options they push into the core installer logic. In both cases they actually render the chart without talking to the Kubernetes server. After doing the render, they do check with the Kubernetes client that the produced YAML is valid for what objects the cluster supports, and they both check whether any of the created objects currently exist in the cluster.

https://stackoverflow.com/questions/65402310/in-helm-3-what-exactly-are-install-dry-run-template-validate-and-lint
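
The comparison in the quote can be tried directly. A sketch, assuming a local chart directory ./mychart (hypothetical path) and a configured kubeconfig:

```shell
# Renders locally against a default set of API versions;
# never contacts the Kubernetes server
helm template myrelease ./mychart

# Renders, then validates the result against the cluster you are pointing at
# (e.g. complains about CRDs the cluster does not know)
helm template myrelease ./mychart --validate

# Renders, validates, and checks for conflicts with existing objects;
# --debug prints the YAML even if it is invalid
helm install myrelease ./mychart --dry-run --debug
```
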

As the explanation above also mentions, --validate checks things like the presence of CRDs against the target Kubernetes cluster, which seems more useful. For reference, here is what the official documentation says:

--validate validate your manifests against the Kubernetes cluster you are currently pointing at. This is the same validation performed on an install

https://helm.sh/docs/helm/helm_template/

Things I still want to check

  • The container's volumes field in the template is empty — what configuration does that actually result in?
  • Am I right in understanding that environment-specific values go into the {} placeholders...?
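
On the first question: an empty volumes: key renders as null in YAML, which Kubernetes treats the same as omitting the field, so no volumes are mounted. On the second: the {} values such as resources: {limits: {}, requests: {}} are the chart's rendered defaults, and they get filled in when you override the corresponding values. A sketch, assuming the Bitnami nginx chart used above still exposes a top-level resources value (repo must be added first):

```shell
# One-time setup for the chart repository
helm repo add bitnami https://charts.bitnami.com/bitnami

# Override resource requests and inspect how the {} placeholders are filled
helm template xxx-web bitnami/nginx \
  --set resources.requests.cpu=100m \
  --set resources.requests.memory=128Mi \
  | grep -A 4 "resources:"
```
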
