k8s hardway@GCP


Tutorial

https://github.com/kelseyhightower/kubernetes-the-hard-way

Region

$ gcloud config set compute/region asia-northeast1

Provisioning Compute Resources

Create a VPC for the k8s cluster.

$ gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom

Network

Configure the subnet.

$ gcloud compute networks subnets create kubernetes \
  --network kubernetes-the-hard-way \
  --range 10.240.0.0/24

Allow all internal TCP, UDP, and ICMP traffic.

$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
  --allow tcp,udp,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 10.240.0.0/24,10.200.0.0/16

Allow only SSH, ICMP, and HTTPS (the API server on tcp:6443) from outside.

$ gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
  --allow tcp:22,tcp:6443,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 0.0.0.0/0

List the created firewall rules.

% gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
NAME                                    NETWORK                  DIRECTION  PRIORITY  ALLOW                 DENY  DISABLED
kubernetes-the-hard-way-allow-external  kubernetes-the-hard-way  INGRESS    1000      tcp:22,tcp:6443,icmp        False
kubernetes-the-hard-way-allow-internal  kubernetes-the-hard-way  INGRESS    1000      tcp,udp,icmp                False

External access

Reserve a static external IP address for the load balancer.

% gcloud compute addresses create kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region)

# List the reserved addresses
% gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
NAME                     ADDRESS/RANGE  TYPE      PURPOSE  NETWORK  REGION           SUBNET  STATUS
kubernetes-the-hard-way  xx.xx.xx.xx   EXTERNAL                    asia-northeast1          RESERVED

Compute Instances

Build the k8s environment from VMs:
three control plane nodes and three workers, six instances in total.

:::message
Make sure the region and zone are set before creating the instances.

Available zones are listed at:
https://cloud.google.com/compute/docs/regions-zones

%  gcloud config configurations list
NAME     IS_ACTIVE  ACCOUNT              PROJECT             COMPUTE_DEFAULT_ZONE  COMPUTE_DEFAULT_REGION
default  True       your@mail.address  k8s-hardway-xxxxx  asia-northeast1-c     asia-northeast1

:::

Controllers

% for i in 0 1 2; do
  gcloud compute instances create controller-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-2004-lts \
    --image-project ubuntu-os-cloud \
    --machine-type e2-standard-2 \
    --private-network-ip 10.240.0.1${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,controller
done

Workers

The CIDR range for pods on each node is set with the following option:
--metadata pod-cidr=10.200.${i}.0/24

% for i in 0 1 2; do
  gcloud compute instances create worker-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-2004-lts \
    --image-project ubuntu-os-cloud \
    --machine-type e2-standard-2 \
    --metadata pod-cidr=10.200.${i}.0/24 \
    --private-network-ip 10.240.0.2${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,worker
done

List the created instances.
External IPs are allocated as well;
they are governed by the firewall rule set up in "Allow only SSH, ICMP, and HTTPS from outside" above.

% gcloud compute instances list --filter="tags.items=kubernetes-the-hard-way"
NAME          ZONE               MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
controller-0  asia-northeast1-c  e2-standard-2               10.240.0.10  xx.xx.xx.xx   RUNNING
controller-1  asia-northeast1-c  e2-standard-2               10.240.0.11  xx.xx.xx.xx    RUNNING
controller-2  asia-northeast1-c  e2-standard-2               10.240.0.12  xx.xx.xx.xx  RUNNING
worker-0      asia-northeast1-c  e2-standard-2               10.240.0.20  xx.xx.xx.xx  RUNNING
worker-1      asia-northeast1-c  e2-standard-2               10.240.0.21  xx.xx.xx.xx    RUNNING
worker-2      asia-northeast1-c  e2-standard-2               10.240.0.22  xx.xx.xx.xx   RUNNING

Testing SSH access

An SSH key pair is generated the first time you connect.

% gcloud compute ssh controller-0
...
Welcome to Ubuntu 20.04.1 LTS (GNU/Linux 5.4.0-1034-gcp x86_64)

Provisioning a CA and Generating TLS Certificates

Use CloudFlare's PKI toolkit (cfssl) to provision a private certificate authority and TLS certificates.
The environment runs in Japan, but the certificate parameters are kept exactly as in the original text.

% cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}

% cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
EOF

% cfssl gencert -initca ca-csr.json | cfssljson -bare ca
% ls
ca-config.json	ca-csr.json	ca-key.pem	ca.csr		ca.pem

Client and Server Certificates

From this section on, client certificates are created per component.

Create a client certificate for the Kubernetes admin user.

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:masters",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare admin

% ls
admin-csr.json	admin.csr	ca-config.json	ca-key.pem	ca.pem
admin-key.pem	admin.pem	ca-csr.json	ca.csr

The Kubelet Client Certificates

To run the nodes under the Node Authorizer authorization mode, create a client certificate for each worker.
The certificate parameters are fixed: the system:nodes group (O) and a CN of the form system:node:<nodeName>.

% for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
  "CN": "system:node:${instance}",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:nodes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')

INTERNAL_IP=$(gcloud compute instances describe ${instance} \
  --format 'value(networkInterfaces[0].networkIP)')

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
  -profile=kubernetes \
  ${instance}-csr.json | cfssljson -bare ${instance}
done

% ls
admin-csr.json		ca-key.pem		worker-0.pem		worker-2-key.pem
admin-key.pem		ca.csr			worker-1-csr.json	worker-2.csr
admin.csr		ca.pem			worker-1-key.pem	worker-2.pem
admin.pem		worker-0-csr.json	worker-1.csr
ca-config.json		worker-0-key.pem	worker-1.pem
ca-csr.json		worker-0.csr		worker-2-csr.json

This confirms that the worker-0 TLS certificate was issued by the CA and that O and CN follow the required format.

% openssl x509 -text -noout -in worker-0.pem | head -n15
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            4d:92:34:ba:ce:e6:60:80:f8:47:b0:af:70:da:3e:27:cc:b0:bb:3b
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=Oregon, L=Portland, O=Kubernetes, OU=CA, CN=Kubernetes
        Validity
            Not Before: Jan 10 12:13:00 2021 GMT
            Not After : Jan 10 12:13:00 2022 GMT
        Subject: C=US, ST=Oregon, L=Portland, O=system:nodes, OU=Kubernetes The Hard Way, CN=system:node:worker-0
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
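
The SANs set via the -hostname flag can be checked as well; one way, assuming OpenSSL 1.1.1 or newer (which supports the -ext option):

% for instance in worker-0 worker-1 worker-2; do
  openssl x509 -noout -ext subjectAltName -in ${instance}.pem
done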

The Controller Manager Client Certificate

cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

% ls
admin-csr.json				worker-0-csr.json
admin-key.pem				worker-0-key.pem
admin.csr				worker-0.csr
admin.pem				worker-0.pem
ca-config.json				worker-1-csr.json
ca-csr.json				worker-1-key.pem
ca-key.pem				worker-1.csr
ca.csr					worker-1.pem
ca.pem					worker-2-csr.json
kube-controller-manager-csr.json	worker-2-key.pem
kube-controller-manager-key.pem		worker-2.csr
kube-controller-manager.csr		worker-2.pem
kube-controller-manager.pem
% openssl x509 -text -noout -in kube-controller-manager.pem | head -n11
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            3f:14:94:b2:bd:a1:a1:d0:fd:09:1e:6b:c6:c9:65:27:37:97:5e:d7
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=Oregon, L=Portland, O=Kubernetes, OU=CA, CN=Kubernetes
        Validity
            Not Before: Jan 10 12:24:00 2021 GMT
            Not After : Jan 10 12:24:00 2022 GMT
        Subject: C=US, ST=Oregon, L=Portland, O=system:kube-controller-manager, OU=Kubernetes The Hard Way, CN=system:kube-controller-manager

The Kube Proxy Client Certificate

Create a client certificate for kube-proxy.

% cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:node-proxier",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare kube-proxy

% ls
admin-csr.json				kube-proxy.csr
admin-key.pem				kube-proxy.pem
admin.csr				worker-0-csr.json
admin.pem				worker-0-key.pem
ca-config.json				worker-0.csr
ca-csr.json				worker-0.pem
ca-key.pem				worker-1-csr.json
ca.csr					worker-1-key.pem
ca.pem					worker-1.csr
kube-controller-manager-csr.json	worker-1.pem
kube-controller-manager-key.pem		worker-2-csr.json
kube-controller-manager.csr		worker-2-key.pem
kube-controller-manager.pem		worker-2.csr
kube-proxy-csr.json			worker-2.pem
kube-proxy-key.pem

The Scheduler Client Certificate

Create a client certificate for the scheduler.

cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-scheduler-csr.json | cfssljson -bare kube-scheduler

% ls
admin-csr.json				kube-scheduler-csr.json
admin-key.pem				kube-scheduler-key.pem
admin.csr				kube-scheduler.csr
admin.pem				kube-scheduler.pem
ca-config.json				worker-0-csr.json
ca-csr.json				worker-0-key.pem
ca-key.pem				worker-0.csr
ca.csr					worker-0.pem
ca.pem					worker-1-csr.json
kube-controller-manager-csr.json	worker-1-key.pem
kube-controller-manager-key.pem		worker-1.csr
kube-controller-manager.csr		worker-1.pem
kube-controller-manager.pem		worker-2-csr.json
kube-proxy-csr.json			worker-2-key.pem
kube-proxy-key.pem			worker-2.csr
kube-proxy.csr				worker-2.pem
kube-proxy.pem

The Kubernetes API Server Certificate

Include the IP addresses in the Subject Alternative Names so that remote clients can validate the certificate.

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes

% ls
admin-csr.json				kube-scheduler.csr
admin-key.pem				kube-scheduler.pem
admin.csr				kubernetes-csr.json
admin.pem				kubernetes-key.pem
ca-config.json				kubernetes.csr
ca-csr.json				kubernetes.pem
ca-key.pem				worker-0-csr.json
ca.csr					worker-0-key.pem
ca.pem					worker-0.csr
kube-controller-manager-csr.json	worker-0.pem
kube-controller-manager-key.pem		worker-1-csr.json
kube-controller-manager.csr		worker-1-key.pem
kube-controller-manager.pem		worker-1.csr
kube-proxy-csr.json			worker-1.pem
kube-proxy-key.pem			worker-2-csr.json
kube-proxy.csr				worker-2-key.pem
kube-proxy.pem				worker-2.csr
kube-scheduler-csr.json			worker-2.pem
kube-scheduler-key.pem

The Kubernetes API server is assigned an internal DNS name linked to an address from the 10.32.0.0/24 range.
That range is the value of --service-cluster-ip-range=10.32.0.0/24 specified later during control plane bootstrapping.

See "choosing your own IP address" in the documentation for details.
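
As a quick sanity check (same OpenSSL assumption as above), the SAN list of kubernetes.pem should contain 10.32.0.1, the controller IPs, 127.0.0.1, the public address, and the KUBERNETES_HOSTNAMES entries:

% openssl x509 -noout -ext subjectAltName -in kubernetes.pem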

The Service Account Key Pair

Create the service-account key pair used for processes running in pods.

cat > service-account-csr.json <<EOF
{
  "CN": "service-accounts",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Kubernetes The Hard Way",
      "ST": "Oregon"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  service-account-csr.json | cfssljson -bare service-account

% ls service-account*
service-account-csr.json        service-account.csr
service-account-key.pem         service-account.pem

Distribute the Client and Server Certificates

Distribute the private keys and certificates to the worker and controller instances.

  • Worker
for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
  • Controller
for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem ${instance}:~/
done

Generating Kubernetes Configuration Files for Authentication

Generate Kubernetes configuration files, known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API servers.

The following sections create configs, with client certificates embedded, for the controller manager, kubelet, kube-proxy, scheduler, and the admin user.

Kubernetes Public IP Address

Store the public IP address in an environment variable.

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')

The kubelet Kubernetes Configuration File

% for instance in worker-0 worker-1 worker-2; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-credentials system:node:${instance} \
    --client-certificate=${instance}.pem \
    --client-key=${instance}-key.pem \
    --embed-certs=true \
    --kubeconfig=${instance}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${instance} \
    --kubeconfig=${instance}.kubeconfig

  kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done

% ls worker*.kubeconfig
worker-0.kubeconfig	worker-1.kubeconfig	worker-2.kubeconfig

The config is generated with the kubelet's client certificate embedded so that it can authenticate under the Node Authorizer mode.

% less worker-0.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: (snip)
    server: https://35.221.79.64:6443
  name: kubernetes-the-hard-way
contexts:
- context:
    cluster: kubernetes-the-hard-way
    user: system:node:worker-0
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: system:node:worker-0
  user:
    client-certificate-data: (snip)

The kube-proxy Kubernetes Configuration File

kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.pem \
    --client-key=kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

% ls
...
kube-proxy.kubeconfig

The kube-controller-manager Kubernetes Configuration File

kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.pem \
    --client-key=kube-controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig

% ls
...
kube-controller-manager.kubeconfig

The kube-scheduler Kubernetes Configuration File

%  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.pem \
    --client-key=kube-scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

% ls
...
kube-scheduler.kubeconfig

The admin Kubernetes Configuration File

kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=admin.kubeconfig

  kubectl config set-credentials admin \
    --client-certificate=admin.pem \
    --client-key=admin-key.pem \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=admin \
    --kubeconfig=admin.kubeconfig

  kubectl config use-context default --kubeconfig=admin.kubeconfig

% ls
...
admin.kubeconfig

Distribute the Kubernetes Configuration Files

Distribute the kubelet and kube-proxy kubeconfigs to the worker instances.

% for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done

Distribute the kube-controller-manager, kube-scheduler, and admin kubeconfigs to the controller instances.

for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done

Generating the Data Encryption Config and Key

Kubernetes supports storing cluster data encrypted at rest, in addition to plaintext.
Here we configure Kubernetes' ability to encrypt Secret data at rest.

Multiple encryption providers are supported; see the encryption config documentation for details.

The Encryption Key

% ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
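
The aescbc provider expects a 32-byte (AES-256) key, so the decoded length can be verified. A minimal check (GNU coreutils assumed; macOS may need base64 -D):

% echo -n "${ENCRYPTION_KEY}" | base64 -d | wc -c
32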

The Encryption Config File

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

Distribute it to the controller instances.

for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp encryption-config.yaml ${instance}:~/
done

Bootstrapping the etcd Cluster

Kubernetes stores cluster state in etcd.
Here a three-node etcd cluster is built.

Prerequisites

Connect to each controller instance and run the following there.

Bootstrapping an etcd Cluster Member

Download and Install the etcd Binaries

wget -q --show-progress --https-only --timestamping \
  "https://github.com/etcd-io/etcd/releases/download/v3.4.10/etcd-v3.4.10-linux-amd64.tar.gz"
tar -xvf etcd-v3.4.10-linux-amd64.tar.gz
sudo mv etcd-v3.4.10-linux-amd64/etcd* /usr/local/bin/

Configure the etcd Server

Place the private key and server certificate created in The Kubernetes API Server Certificate into etcd's directory.

sudo mkdir -p /etc/etcd /var/lib/etcd
sudo chmod 700 /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/

The instance's internal IP address is used to serve client requests and to communicate with etcd cluster peers.

INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)

Give each etcd member a unique name.

ETCD_NAME=$(hostname -s)

Run etcd as a systemd service.

cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Start the etcd Server

sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd

Verification

Verify that the members have joined the etcd cluster.

sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379, false
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379, false
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379, false
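
Optionally, each endpoint's health can be probed with etcdctl as well:

sudo ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem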

Bootstrapping the Kubernetes Control Plane

Make the control plane highly available across the controller instances, and create an external load balancer that exposes the Kubernetes API to remote clients.

Install the Kubernetes API Server, Scheduler, and Controller Manager on each controller node.

Provision the Kubernetes Control Plane

sudo mkdir -p /etc/kubernetes/config

Download and Install the Kubernetes Controller Binaries

wget -q --show-progress --https-only --timestamping \
  "https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-apiserver" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-controller-manager" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-scheduler" \
  "https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubectl"
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/

Configure the Kubernetes API Server

  sudo mkdir -p /var/lib/kubernetes/

  sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/

From here on, create a systemd service for each component.

The internal IP is used to advertise the API server to the other members of the cluster.

INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --advertise-address=${INTERNAL_IP} \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/audit.log \\
  --authorization-mode=Node,RBAC \\
  --bind-address=0.0.0.0 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --etcd-cafile=/var/lib/kubernetes/ca.pem \\
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
  --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
  --event-ttl=1h \\
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
  --kubelet-https=true \\
  --runtime-config='api/all=true' \\
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Configure the Kubernetes Controller Manager

sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
  --bind-address=0.0.0.0 \\
  --cluster-cidr=10.200.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
  --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
  --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
  --leader-elect=true \\
  --root-ca-file=/var/lib/kubernetes/ca.pem \\
  --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --use-service-account-credentials=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Configure the Kubernetes Scheduler

sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/
cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
leaderElection:
  leaderElect: true
EOF
cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Start the Controller Services

Start the controller services.

sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler

Enable HTTP Health Checks

The Google Network Load Balancer supports only HTTP health checks, so it cannot directly check the API server, which is served over HTTPS.
As a workaround, run an HTTP web server for health checks on each controller and have it proxy requests through to the API server.

sudo apt-get update
sudo apt-get install -y nginx
cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
EOF
sudo mv kubernetes.default.svc.cluster.local \
  /etc/nginx/sites-available/kubernetes.default.svc.cluster.local

sudo ln -s /etc/nginx/sites-available/kubernetes.default.svc.cluster.local /etc/nginx/sites-enabled/
sudo systemctl restart nginx
sudo systemctl enable nginx

Verification

Check the status of the control plane components.

$ kubectl get componentstatuses --kubeconfig admin.kubeconfig
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
$ curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Mon, 11 Jan 2021 13:09:13 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff

ok

RBAC for Kubelet Authorization

Configure RBAC so that the Kubernetes API server can access the Kubelet API on each worker node.
The permissions granted cover retrieving metrics and logs and executing commands in pods.

The tutorial runs the Kubelet with the Webhook authorization mode.

Create the system:kube-apiserver-to-kubelet ClusterRole.

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF

The Kubernetes API server authenticates to the Kubelet as the kubernetes user, using the client certificate defined by the --kubelet-client-certificate flag; bind the ClusterRole to that user.

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
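
To confirm both objects were created (run wherever admin.kubeconfig is available):

kubectl get clusterrole system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
kubectl get clusterrolebinding system:kube-apiserver --kubeconfig admin.kubeconfig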

The Kubernetes Frontend Load Balancer

Place an external load balancer in front of the Kubernetes API servers.

Provision a Network Load Balancer

 KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')

  gcloud compute http-health-checks create kubernetes \
    --description "Kubernetes Health Check" \
    --host "kubernetes.default.svc.cluster.local" \
    --request-path "/healthz"

  gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
    --network kubernetes-the-hard-way \
    --source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
    --allow tcp

  gcloud compute target-pools create kubernetes-target-pool \
    --http-health-check kubernetes

  gcloud compute target-pools add-instances kubernetes-target-pool \
   --instances controller-0,controller-1,controller-2

  gcloud compute forwarding-rules create kubernetes-forwarding-rule \
    --address ${KUBERNETES_PUBLIC_ADDRESS} \
    --ports 6443 \
    --region $(gcloud config get-value compute/region) \
    --target-pool kubernetes-target-pool

Verification

Verify from the local machine used for provisioning.

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region) \
  --format 'value(address)')
% curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
{
  "major": "1",
  "minor": "18",
  "gitVersion": "v1.18.6",
  "gitCommit": "dff82dc0de47299ab66c83c626e08b245ab19037",
  "gitTreeState": "clean",
  "buildDate": "2020-07-15T16:51:04Z",
  "goVersion": "go1.13.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}%

Bootstrapping the Kubernetes Worker Nodes

Install the worker node components: runc, container networking plugins, containerd, kubelet, and kube-proxy.
Connect to each worker node first.

gcloud compute ssh worker-0

Provisioning a Kubernetes Worker Node

sudo apt-get update
sudo apt-get -y install socat conntrack ipset
  • socat - multipurpose relay (SOcket CAT); relays TCP connections
  • ipset - administration tool for IP sets; manages sets of IPs, ports, and the like
  • conntrack - tracks the connection state of packets, exposing more connection tracking information than the "state" match

Disable Swap

Swap must be disabled in a Kubernetes cluster; the kubelet fails to start when swap is enabled.

sudo swapon --show
sudo swapoff -a
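
If swapon --show prints nothing, swap is already off. Note that swapoff -a does not persist across reboots; one way to make it permanent, assuming swap is mounted via /etc/fstab, is to comment out the swap entries (GCE Ubuntu images usually ship without swap, so this is often unnecessary):

sudo sed -i '/ swap / s/^/#/' /etc/fstab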

Download and Install Worker Binaries

wget -q --show-progress --https-only --timestamping \
  https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.18.0/crictl-v1.18.0-linux-amd64.tar.gz \
  https://github.com/opencontainers/runc/releases/download/v1.0.0-rc91/runc.amd64 \
  https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz \
  https://github.com/containerd/containerd/releases/download/v1.3.6/containerd-1.3.6-linux-amd64.tar.gz \
  https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubectl \
  https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kube-proxy \
  https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubelet
sudo mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes
mkdir containerd
tar -xvf crictl-v1.18.0-linux-amd64.tar.gz
tar -xvf containerd-1.3.6-linux-amd64.tar.gz -C containerd
sudo tar -xvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin/
sudo mv runc.amd64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
sudo mv containerd/bin/* /bin/

Configure CNI Networking

Configure the Container Network Interface (CNI).
CNI satisfies the networking requirements Kubernetes imposes:

  1. All containers can communicate with any other container without NAT
  2. All nodes can communicate with all containers (and vice versa) without NAT
  3. The IP address a container sees itself as is the same address others see it as

Fetch the CIDR range to allocate to pods on this node.
It is read from the instance metadata set when the instance was created.

POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
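
On worker-0 this should echo back the range passed via --metadata at instance creation:

echo ${POD_CIDR}
10.200.0.0/24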

Create the bridge network that connects pods.

cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF

Configure the loopback interface.

cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
    "cniVersion": "0.3.1",
    "name": "lo",
    "type": "loopback"
}
EOF
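
The cnio0 bridge defined above is created lazily, once the first pod is scheduled onto the node; after that it can be inspected with:

ip addr show cnio0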

Configure containerd

sudo mkdir -p /etc/containerd/
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
EOF

Create the containerd systemd service.

cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF

Configure the Kubelet

  sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
  sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
  sudo mv ca.pem /var/lib/kubernetes/
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF

resolvConf points the kubelet at systemd-resolved's real upstream resolver list; this avoids DNS loops when CoreDNS is used for service discovery on hosts running systemd-resolved.
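
On Ubuntu 20.04, /etc/resolv.conf is a symlink managed by systemd-resolved pointing at the local stub listener (127.0.0.53); queries forwarded there from CoreDNS would loop. /run/systemd/resolve/resolv.conf holds the real upstream servers, which can be confirmed with:

ls -l /etc/resolv.conf
grep nameserver /run/systemd/resolve/resolv.conf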

Configure the Kubernetes Proxy

sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Start the Worker Services

  sudo systemctl daemon-reload
  sudo systemctl enable containerd kubelet kube-proxy
  sudo systemctl start containerd kubelet kube-proxy

Verification

Verify that the worker nodes are registered with the cluster.

gcloud compute ssh controller-0 \
  --command "kubectl get nodes --kubeconfig admin.kubeconfig"
NAME       STATUS   ROLES    AGE   VERSION
worker-0   Ready    <none>   24s   v1.18.6
worker-1   Ready    <none>   24s   v1.18.6
worker-2   Ready    <none>   24s   v1.18.6

Provisioning Pod Network Routes

Pods scheduled to a node receive an IP address from that node's pod CIDR range.
At this point pods on different nodes cannot communicate with each other, because the network routes are missing.

The Routing Table

Gather the information needed to create routes in the VPC network.

for instance in worker-0 worker-1 worker-2; do
  gcloud compute instances describe ${instance} \
    --format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
done
10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24

Routes

Create a network route for each worker instance.

for i in 0 1 2; do
  gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
    --network kubernetes-the-hard-way \
    --next-hop-address 10.240.0.2${i} \
    --destination-range 10.200.${i}.0/24
done

Inspect the routes in the VPC network.

gcloud compute routes list --filter "network: kubernetes-the-hard-way"

Each node's internal IP is the next hop for its pod CIDR range.

NAME                            NETWORK                  DEST_RANGE     NEXT_HOP                  PRIORITY
default-route-3f13b2ec7d4f6e45  kubernetes-the-hard-way  0.0.0.0/0      default-internet-gateway  1000
default-route-419089e0c4331c42  kubernetes-the-hard-way  10.240.0.0/24  kubernetes-the-hard-way   0
kubernetes-route-10-200-0-0-24  kubernetes-the-hard-way  10.200.0.0/24  10.240.0.20               1000
kubernetes-route-10-200-1-0-24  kubernetes-the-hard-way  10.200.1.0/24  10.240.0.21               1000
kubernetes-route-10-200-2-0-24  kubernetes-the-hard-way  10.200.2.0/24  10.240.0.22               1000

Deploying the DNS Cluster Add-on

Deploy the DNS add-on, which provides DNS-based service discovery backed by CoreDNS.

The DNS Cluster Add-on

Deploy the coredns cluster add-on.
By this point kubectl has been set up for remote access (Configuring kubectl for Remote Access) so it can operate the k8s-hardway cluster.

kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.7.0.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

Verify that the kube-dns deployment has created the pods.

kubectl get pods -l k8s-app=kube-dns -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-5677dc4cdb-f87t6   1/1     Running   0          2m46s
coredns-5677dc4cdb-jq2qh   1/1     Running   0          2m46s

Verification

Run a busybox pod.

kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
kubectl get pods -l run=busybox
NAME      READY   STATUS              RESTARTS   AGE
busybox   0/1     ContainerCreating   0          5s

Try resolving the kubernetes service from inside the busybox pod.

POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
kubectl exec -ti $POD_NAME -- nslookup kubernetes

The lookup returns the IP of the Kubernetes API service, drawn from the --service-cluster-ip-range configured in Configure the Kubernetes API Server.

Server:    10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
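
The short name resolves because the pod's resolv.conf carries the cluster search domains; the fully qualified name works as well:

kubectl exec -ti $POD_NAME -- nslookup kubernetes.default.svc.cluster.local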

Smoke Test

Run smoke tests to verify the Kubernetes cluster functions correctly.

Data Encryption

kubectl create secret generic kubernetes-the-hard-way \
  --from-literal="mykey=mydata"
gcloud compute ssh controller-0 \
  --command "sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"

Inspect the hexdump to confirm the data is encrypted with the aescbc provider (note the k8s:enc:aescbc:v1:key1 prefix).

00000000  2f 72 65 67 69 73 74 72  79 2f 73 65 63 72 65 74  |/registry/secret|
00000010  73 2f 64 65 66 61 75 6c  74 2f 6b 75 62 65 72 6e  |s/default/kubern|
00000020  65 74 65 73 2d 74 68 65  2d 68 61 72 64 2d 77 61  |etes-the-hard-wa|
00000030  79 0a 6b 38 73 3a 65 6e  63 3a 61 65 73 63 62 63  |y.k8s:enc:aescbc|
00000040  3a 76 31 3a 6b 65 79 31  3a 50 9e 6e d7 36 56 4f  |:v1:key1:P.n.6VO|
00000050  98 cc 67 22 dc f1 c0 cc  58 e3 69 41 2d 33 d2 9a  |..g"....X.iA-3..|
00000060  6d 01 32 02 64 75 da 03  ce dc 1c 7a a4 10 64 27  |m.2.du.....z..d'|
00000070  12 02 40 c1 f1 f1 17 72  12 fe 89 33 1e a9 e9 3b  |..@....r...3...;|
00000080  38 cc 35 d6 b1 9d 99 1c  7c e7 06 3f 6a e3 78 53  |8.5.....|..?j.xS|
00000090  5e d5 94 f9 2f 98 fb 87  9f 1d eb f0 04 7e 03 ec  |^.../........~..|
000000a0  6c 60 bd c0 5a c3 e8 69  55 0d 18 19 ce 48 6f 55  |l`..Z..iU....HoU|
000000b0  d1 94 89 40 26 bb a8 fd  95 69 0d 2d 4f 54 00 48  |...@&....i.-OT.H|
000000c0  fc b1 21 b1 99 70 c8 0a  0a a1 dd 2b 8d 75 78 b7  |..!..p.....+.ux.|
000000d0  a5 64 5f 1e 77 1a 69 83  8d 7c 7d e4 db 01 16 fe  |.d_.w.i..|}.....|
000000e0  13 c9 13 2d 9b d4 7c 8d  aa bc ca 13 71 85 c1 5d  |...-..|.....q..]|
000000f0  71 d9 87 bd 4e be 37 64  43 b6 ec 89 29 b9 05 05  |q...N.7dC...)...|
00000100  56 aa 06 30 e4 de 86 55  3b 43 ac c9 08 8b a4 60  |V..0...U;C.....`|
00000110  19 6f 75 fd 58 b7 c3 66  e9 cb f7 2f e6 b0 a8 dc  |.ou.X..f.../....|
00000120  72 de 56 a6 3e 1c f4 d5  7e 7e ee 90 85 db 6c f5  |r.V.>...~~....l.|
00000130  0b be ed 16 7d ec bf 87  d0 e5 31 07 56 d3 de 17  |....}.....1.V...|
00000140  db 3f 49 bf 7b 2f bc 29  aa fd 06 26 86 a4 f9 8d  |.?I.{/.)...&....|
00000150  78 8b 88 20 7e 1c 1b 99  a0 0a                    |x.. ~.....|
0000015a

Deployments

Create a pod via a Deployment.

kubectl create deployment nginx --image=nginx
kubectl get pods -l app=nginx
NAME                    READY   STATUS              RESTARTS   AGE
nginx-f89759699-mhs69   0/1     ContainerCreating   0          5s

Port Forwarding

Verify port forwarding from a client into the pod.

POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward $POD_NAME 8080:80

Access it from another terminal.

curl --head http://127.0.0.1:8080
HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Tue, 12 Jan 2021 13:09:03 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 15 Dec 2020 13:59:38 GMT
Connection: keep-alive
ETag: "5fd8c14a-264"
Accept-Ranges: bytes

Stop the port forwarding; the forwarding terminal shows:

Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
Handling connection for 8080
^C%

Logs

Fetch the container's logs.

kubectl logs $POD_NAME
...
127.0.0.1 - - [12/Jan/2021:13:09:03 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.1" "-"

Exec

kubectl exec -ti $POD_NAME -- nginx -v
nginx version: nginx/1.19.6

Services

Expose the Service externally and connect to it remotely.

kubectl expose deployment nginx --port 80 --type NodePort
NODE_PORT=$(kubectl get svc nginx \
  --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
  --allow=tcp:${NODE_PORT} \
  --network kubernetes-the-hard-way
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
  --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
curl -I http://${EXTERNAL_IP}:${NODE_PORT}
HTTP/1.1 200 OK
Server: nginx/1.19.1
Date: Sat, 18 Jul 2020 07:16:41 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 07 Jul 2020 15:52:25 GMT
Connection: keep-alive
ETag: "5f049a39-264"
Accept-Ranges: bytes

Cleaning Up

Delete everything that was built for the Kubernetes cluster.

Compute Instances

gcloud -q compute instances delete \
  controller-0 controller-1 controller-2 \
  worker-0 worker-1 worker-2 \
  --zone $(gcloud config get-value compute/zone)

Networking

gcloud -q compute forwarding-rules delete kubernetes-forwarding-rule \
  --region $(gcloud config get-value compute/region)

gcloud -q compute target-pools delete kubernetes-target-pool

gcloud -q compute http-health-checks delete kubernetes

gcloud -q compute addresses delete kubernetes-the-hard-way

gcloud -q compute firewall-rules delete \
  kubernetes-the-hard-way-allow-nginx-service \
  kubernetes-the-hard-way-allow-internal \
  kubernetes-the-hard-way-allow-external \
  kubernetes-the-hard-way-allow-health-check

gcloud -q compute routes delete \
  kubernetes-route-10-200-0-0-24 \
  kubernetes-route-10-200-1-0-24 \
  kubernetes-route-10-200-2-0-24

gcloud -q compute networks subnets delete kubernetes

gcloud -q compute networks delete kubernetes-the-hard-way