
kube-proxy

bells17

Entry point

Run

runLoop

ProxyServer.Run

Inside the Proxier

bells17

s.Proxier.SyncLoop

proxier.syncRunner.Loop

proxier.syncRunner = async.NewBoundedFrequencyRunner("sync-runner", proxier.syncProxyRules, minSyncPeriod, time.Hour, burstSyncs)
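
As a rough mental model of what that runner does (a simplified sketch, not the real async.BoundedFrequencyRunner implementation; miniRunner and its fields are made up for illustration): Run() requests are coalesced, the wrapped function (syncProxyRules here) never fires more often than the minimum interval, and it is also guaranteed to fire at least once per maximum interval (the time.Hour above).

// Simplified sketch of the bounded-frequency-runner idea (illustration only, not
// the real k8s.io/kubernetes/pkg/util/async implementation, which also handles
// bursts and retry backoff).
package main

import "time"

type miniRunner struct {
	fn          func()        // the wrapped function (syncProxyRules in kube-proxy)
	minInterval time.Duration // never run more often than this
	maxInterval time.Duration // always run at least this often
	kick        chan struct{} // Run() requests are coalesced here
}

func newMiniRunner(fn func(), minInterval, maxInterval time.Duration) *miniRunner {
	return &miniRunner{fn: fn, minInterval: minInterval, maxInterval: maxInterval, kick: make(chan struct{}, 1)}
}

// Run asks for the function to be executed "soon"; extra calls while one is pending are merged.
func (r *miniRunner) Run() {
	select {
	case r.kick <- struct{}{}:
	default:
	}
}

// Loop blocks, running fn when kicked (rate-limited by minInterval)
// and at least once every maxInterval.
func (r *miniRunner) Loop(stop <-chan struct{}) {
	var last time.Time
	periodic := time.NewTicker(r.maxInterval)
	defer periodic.Stop()
	for {
		select {
		case <-stop:
			return
		case <-r.kick:
			if wait := r.minInterval - time.Since(last); wait > 0 {
				time.Sleep(wait) // the real runner re-arms a timer instead of sleeping
			}
		case <-periodic.C:
		}
		r.fn()
		last = time.Now()
	}
}

func main() {
	r := newMiniRunner(func() { println("sync") }, time.Second, time.Hour)
	stop := make(chan struct{})
	go r.Loop(stop)
	r.Run() // a Service/EndpointSlice change would trigger this
	time.Sleep(2 * time.Second)
	close(stop)
}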

syncProxyRules

Assumptions

  • The behavior differs depending on whether proxier.needFullSync is true or false; I read through it assuming proxier.needFullSync: true (that path seems to cover more of the code)
  • LocalhostNodePorts (--iptables-localhost-nodeports) is true (the default)
  • MasqueradeAll is false (the default)
  • LocalDetector is noOpLocalDetector (the default)
  • --nodeport-addresses is the default []string{}

Related information

sysctl settings when LocalhostNodePorts: true

proxier.serviceChanges

Manages the data for Services that have changes not yet applied by syncProxyRules

In this case, proxier.serviceChanges.processServiceMapChange appears to end up being nil
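
As a rough sketch of the change-tracking pattern (the names below are invented for illustration and do not match the exact ServiceChangeTracker fields): pending changes are keyed by namespace/name, each entry keeps the state before the first unsynced update plus the latest state, and syncProxyRules drains them in one go.

// Simplified sketch of the change-tracker pattern behind proxier.serviceChanges
// (illustration only; field and type names don't match the real ServiceChangeTracker).
package main

import (
	"fmt"
	"sync"
)

type namespacedName struct{ Namespace, Name string }

// serviceChange keeps the state before the first unsynced update and the latest state.
type serviceChange struct {
	previous map[string]string // stand-in for the derived ServicePortMap entries
	current  map[string]string
}

type changeTracker struct {
	mu    sync.Mutex
	items map[namespacedName]*serviceChange
}

// Update records another change for a Service; only the first "previous" is kept.
func (t *changeTracker) Update(key namespacedName, previous, current map[string]string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	change, ok := t.items[key]
	if !ok {
		change = &serviceChange{previous: previous}
		t.items[key] = change
	}
	change.current = current
}

// drain hands all pending changes to the caller (syncProxyRules would merge them
// into proxier.svcPortMap) and resets the tracker.
func (t *changeTracker) drain() map[namespacedName]*serviceChange {
	t.mu.Lock()
	defer t.mu.Unlock()
	pending := t.items
	t.items = map[namespacedName]*serviceChange{}
	return pending
}

func main() {
	t := &changeTracker{items: map[namespacedName]*serviceChange{}}
	key := namespacedName{Namespace: "default", Name: "np-service"}
	t.Update(key, nil, map[string]string{"default/np-service:80/TCP": "10.96.191.124:80"})
	fmt.Println(len(t.drain()), "pending change(s)")
}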

proxier.endpointsChanges

The EndpointSlice version of proxier.serviceChanges.
The data structure is a bit different, but it likewise manages the data for EndpointSlices that have changes not yet applied by syncProxyRules
(so I'll just leave a link here)

In this case, proxier.endpointsChanges.processEndpointsMapChange likewise appears to end up being nil

EndpointInfo generation

ServiceInfo generation

The contents

  • https://github.com/kubernetes/kubernetes/blob/v1.28.2/pkg/proxy/iptables/proxier.go#L766-L770 - waits until the various data (Services etc.) have finished syncing
  • https://github.com/kubernetes/kubernetes/blob/v1.28.2/pkg/proxy/iptables/proxier.go#L788-L792 - fetches the changed data from proxier.serviceChanges and proxier.endpointsChanges
  • https://github.com/kubernetes/kubernetes/blob/v1.28.2/pkg/proxy/iptables/proxier.go#L793-L794 - fetches the changed data from proxier.serviceChanges and proxier.endpointsChanges and merges it into proxier.svcPortMap and proxier.endpointsMap respectively
  • https://github.com/kubernetes/kubernetes/blob/v1.28.2/pkg/proxy/iptables/proxier.go#L798-L811 - retry handling for when processing fails
  • https://github.com/kubernetes/kubernetes/blob/v1.28.2/pkg/proxy/iptables/proxier.go#L813-L840 - the following runs when proxier.needFullSync: true (roughly speaking it only creates each chain and sets up the jump rules)
    • The comment reads:
    Ensure that our jump rules (eg from PREROUTING to KUBE-SERVICES) exist.
    We can't do this as part of the iptables-restore because we don't want to specify/replace *all* of the rules in PREROUTING, etc.

    We need to create these rules when kube-proxy first starts, and we need to recreate them if the utiliptables Monitor detects that iptables has been flushed.
    In both of those cases, the code will force a full sync.
    In all other cases, it ought to be safe to assume that the rules already exist, so we'll skip this step when doing a partial sync, to save us from having to invoke /sbin/iptables 20 times on each sync (which will be very slow on hosts with lots of iptables rules).
    
    • What gets set up here is the creation of the chains and the creation of the rules
    var iptablesJumpChains = []iptablesJumpChain{
      {utiliptables.TableFilter, kubeExternalServicesChain, utiliptables.ChainInput, "kubernetes externally-visible service portals", []string{"-m", "conntrack", "--ctstate", "NEW"}},
      {utiliptables.TableFilter, kubeExternalServicesChain, utiliptables.ChainForward, "kubernetes externally-visible service portals", []string{"-m", "conntrack", "--ctstate", "NEW"}},
      {utiliptables.TableFilter, kubeNodePortsChain, utiliptables.ChainInput, "kubernetes health check service ports", nil},
      {utiliptables.TableFilter, kubeServicesChain, utiliptables.ChainForward, "kubernetes service portals", []string{"-m", "conntrack", "--ctstate", "NEW"}},
      {utiliptables.TableFilter, kubeServicesChain, utiliptables.ChainOutput, "kubernetes service portals", []string{"-m", "conntrack", "--ctstate", "NEW"}},
      {utiliptables.TableFilter, kubeForwardChain, utiliptables.ChainForward, "kubernetes forwarding rules", nil},
      {utiliptables.TableFilter, kubeProxyFirewallChain, utiliptables.ChainInput, "kubernetes load balancer firewall", []string{"-m", "conntrack", "--ctstate", "NEW"}},
      {utiliptables.TableFilter, kubeProxyFirewallChain, utiliptables.ChainOutput, "kubernetes load balancer firewall", []string{"-m", "conntrack", "--ctstate", "NEW"}},
      {utiliptables.TableFilter, kubeProxyFirewallChain, utiliptables.ChainForward, "kubernetes load balancer firewall", []string{"-m", "conntrack", "--ctstate", "NEW"}},
      {utiliptables.TableNAT, kubeServicesChain, utiliptables.ChainOutput, "kubernetes service portals", nil},
      {utiliptables.TableNAT, kubeServicesChain, utiliptables.ChainPrerouting, "kubernetes service portals", nil},
      {utiliptables.TableNAT, kubePostroutingChain, utiliptables.ChainPostrouting, "kubernetes postrouting rules", nil},
    }
    
    var iptablesKubeletJumpChains = []iptablesJumpChain{
      {utiliptables.TableFilter, kubeletFirewallChain, utiliptables.ChainInput, "", nil},
      {utiliptables.TableFilter, kubeletFirewallChain, utiliptables.ChainOutput, "", nil},
    }
    
    • The chains and rules set up here look roughly like the following
      • Chain creation command
        • The command looks like this
        iptables -w 5 -W 100000 -N "KUBE-EXTERNAL-SERVICES" -t filter
        
      • The check command run before setting a rule looks like this (when the jump target is wrong the exit status is 2, but when only the comment differs the error code appears to be 1)
        root@kind-control-plane:/# iptables -w 5 -W 100000 -C INPUT -t filter -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j "KUBE-EXTERNAL-SERVICES"
        KUBE-EXTERNAL-SERVICES  all opt -- in * out *  0.0.0.0/0  -> 0.0.0.0/0   ctstate NEW /* kubernetes externally-visible service portals */
        root@kind-control-plane:/# echo $?
        0
        
        root@kind-control-plane:/# iptables -w 5 -W 100000 -C INPUT -t filter -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j "KUBE-EXTERNAL-SERVICES-notfoundchain"
        iptables v1.8.7 (nf_tables): Invalid target name `KUBE-EXTERNAL-SERVICES-notfoundchain' (28 chars max)
        Try `iptables -h' or 'iptables --help' for more information.
        root@kind-control-plane:/# echo $?
        2
        
        root@kind-control-plane:/# iptables -w 5 -W 100000 -C INPUT -t filter -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals not found comment" -j "KUBE-EXTERNAL-SERVICES"
        iptables: Bad rule (does a matching rule exist in that chain?).
        root@kind-control-plane:/# echo $?
        1
        
      • Rule creation command
        • The command looks like this (it's a write operation, so I haven't tried it this time)
        iptables -w 5 -W 100000 -I INPUT -t filter -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j "KUBE-EXTERNAL-SERVICES"
        
    • The shared chain-creation step ends up creating the same chain more than once (e.g. "KUBE-EXTERNAL-SERVICES" -t filter), but there is error handling for the case where the chain already exists, so this seems fine (in iptablesJumpChain and the like, the first field is the table and the second is the chain name)
  • https://github.com/kubernetes/kubernetes/blob/v1.28.2/pkg/proxy/iptables/proxier.go#L846-L851 - this is where the filter and nat iptables rule data held inside kube-proxy gets initialized
    • From a quick look at the processing that follows, rules are accumulated in variables like proxier.filterChains and the data appears to be written out all at once at the end, so this looks like initialization of the buffers used for that (see the sketch just after this list)
  • https://github.com/kubernetes/kubernetes/blob/v1.28.2/pkg/proxy/iptables/proxier.go#L856-L862
    • For each chain in the filter and nat tables it writes :<chain name> - [0:0]
    • Apparently this can also be written like :<chain name> ACCEPT [0:0]; it declares the chain and resets the counters (resetting the counters only resets the statistics, it doesn't change any behavior)
    • The tables and chains that get declared are:
      • filter: "KUBE-SERVICES", "KUBE-EXTERNAL-SERVICES", "KUBE-FORWARD", "KUBE-NODEPORTS", "KUBE-PROXY-FIREWALL"
      • nat: "KUBE-SERVICES", "KUBE-NODEPORTS", "KUBE-POSTROUTING", "KUBE-MARK-MASQ"
  • https://github.com/kubernetes/kubernetes/blob/v1.28.2/pkg/proxy/iptables/proxier.go#L864C5-L886
    • This appears to set up the POSTROUTING-related rules
    • The rules being set look like the following
      • -A "KUBE-POSTROUTING" -m mark ! --mark "0x4000/0x4000" -j RETURN
        • In the KUBE-POSTROUTING chain, packets without the mark (the specific bit not set) are not processed by the remaining rules (the KUBE-POSTROUTING chain returns to the calling chain)
        • 0x4000 is the value when MasqueradeBit is the default of 14
      • -A "KUBE-POSTROUTING" -j MARK --xor-mark 0x4000
        • The --xor-mark 0x4000 option clears the packet's mark (flips the 0x4000 bit back to 0)
        • Any packet that reaches this rule is guaranteed by the previous rule to carry the 0x4000 mark, so the mark is reliably cleared
        • Which presumably means the mark is being set somewhere else
      • -A "KUBE-POSTROUTING" -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
        • Applies SNAT to packets passing through this rule (the MASQUERADE target dynamically rewrites the source address so traffic leaving the cluster gets routed correctly on the external network)
        • --random-fully: an option that makes port selection fully random when the MASQUERADE action is applied
  • https://github.com/kubernetes/kubernetes/blob/v1.28.2/pkg/proxy/iptables/proxier.go#L888-L894
    • -A "KUBE-MARK-MASQ" -j MARK --or-mark 0x4000
    • So this is apparently where packets get the mark applied
  • https://github.com/kubernetes/kubernetes/blob/v1.28.2/pkg/proxy/iptables/proxier.go#L896-L918 - processing for the default IPv4 settings
  • https://github.com/kubernetes/kubernetes/blob/v1.28.2/pkg/proxy/iptables/proxier.go#L929-L935
    • If the total number of Endpoints (as set in EndpointSlices) exceeds 1000 (largeClusterEndpointsThreshold), it behaves as largeClusterMode
  • https://github.com/kubernetes/kubernetes/blob/v1.28.2/pkg/proxy/iptables/proxier.go#L942-L1381 - from here on, the for range loop over proxier.svcPortMap (about 400 lines)
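
As referenced above, here is a minimal sketch of how the restore payload gets assembled, assuming the iptables-restore text format; the chain list matches the one above, but the rule content is just an example, not the real kube-proxy rule set.

// Minimal sketch of how the restore payload is accumulated: chain declarations
// (":<chain> - [0:0]") and "-A" rules are buffered per table and then handed to
// `iptables-restore --noflush --counters` in one shot.
package main

import (
	"bytes"
	"fmt"
)

func main() {
	var filterChains, filterRules bytes.Buffer

	// Declare the chains; "-" means a user chain has no policy to set,
	// and [0:0] resets the packet/byte counters.
	for _, chain := range []string{"KUBE-SERVICES", "KUBE-EXTERNAL-SERVICES", "KUBE-FORWARD", "KUBE-NODEPORTS", "KUBE-PROXY-FIREWALL"} {
		fmt.Fprintf(&filterChains, ":%s - [0:0]\n", chain)
	}

	// Append rules into the declared chains (one illustrative rule).
	fmt.Fprintln(&filterRules, "-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP")

	// Assemble the iptables-restore input for the filter table.
	var payload bytes.Buffer
	payload.WriteString("*filter\n")
	payload.Write(filterChains.Bytes())
	payload.Write(filterRules.Bytes())
	payload.WriteString("COMMIT\n")

	fmt.Print(payload.String())
}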

Processing after this point

It got too long to fit here, so I moved the rest over to:
https://zenn.dev/bells17/scraps/5e41da598a8266#comment-728960e6bba0b9

bells17

go ipt.Monitor

Summary of what it does

In each of the following iptables tables:

  • mangle
  • nat
  • filter

a chain called "KUBE-PROXY-CANARY" is created, and then the code keeps checking at a fixed interval whether each of the created chains still exists.

If a chain is missing, or the command that lists iptables chains returns an error, it runs the reloadFunc passed in as an argument and then redoes the whole create-and-monitor cycle for the "KUBE-PROXY-CANARY" chains.

In short, its job is to create (and watch) "KUBE-PROXY-CANARY" in each table.

The reloadFunc here is proxier.forceSyncProxyRules, which in turn runs proxier.syncProxyRules.
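
As a rough sketch of that monitoring loop (simplified from the real utiliptables Monitor; ensureChain and chainExists are hypothetical stand-ins for the real utiliptables calls shown in the command examples below):

// Simplified sketch of the canary-monitoring idea behind ipt.Monitor.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

var tables = []string{"mangle", "nat", "filter"}

func ensureChain(table, chain string) error {
	// `-N` fails if the chain already exists; for this sketch that is fine.
	return exec.Command("iptables", "-w", "5", "-N", chain, "-t", table).Run()
}

func chainExists(table, chain string) bool {
	// `-S <chain>` exits non-zero when the chain does not exist.
	return exec.Command("iptables", "-w", "5", "-S", chain, "-t", table).Run() == nil
}

// monitor creates the canary chain in every table, polls for it, and calls
// reloadFunc (proxier.forceSyncProxyRules) whenever it goes missing.
func monitor(canary string, interval time.Duration, reloadFunc func(), stop <-chan struct{}) {
	for {
		for _, t := range tables {
			_ = ensureChain(t, canary)
		}
		for {
			select {
			case <-stop:
				return
			case <-time.After(interval):
			}
			missing := false
			for _, t := range tables {
				if !chainExists(t, canary) {
					missing = true
				}
			}
			if missing {
				fmt.Println("canary chain missing; forcing a full sync")
				reloadFunc() // -> proxier.syncProxyRules
				break        // recreate the canary chains and keep watching
			}
		}
	}
}

func main() {
	stop := make(chan struct{})
	go monitor("KUBE-PROXY-CANARY", 10*time.Second, func() {}, stop)
	time.Sleep(30 * time.Second)
	close(stop)
}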

About the iptables commands that get executed

The iptables commands are executed via a runner along these lines.

Ultimately they are executed through run ~ runContext.

The executed command and its arguments can be classified as follows:

| Type | Example command / arguments | Description | Link |
| --- | --- | --- | --- |
| Command | iptables or ip6tables | the command is chosen depending on whether IPv6 is in use | link |
| Common arguments | one of three patterns: -w, -w 5, -w 5 -W 100000 | arguments shared by every invocation, chosen per detected iptables version | link |
| Operation | operation arguments such as -S or -N | set according to the operation being executed | for Ensure |
| Table / chain arguments | <chain> -t <table> | set separately per operation | link |
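
Putting the table above together, the final command line gets assembled along these lines (a sketch under those assumptions; buildArgs is a made-up helper, and the real runner's version detection and makeFullArgs are more involved):

// Rough sketch of how the pieces in the table above combine into one command line.
package main

import "fmt"

func buildArgs(ipv6 bool, op, chain, table string) (string, []string) {
	cmd := "iptables"
	if ipv6 {
		cmd = "ip6tables"
	}
	// Common wait flags; which of "-w" / "-w 5" / "-w 5 -W 100000" is used
	// depends on the detected iptables version.
	args := []string{"-w", "5", "-W", "100000"}
	// Operation (-N, -S, -C, -I, ...), then the table/chain arguments.
	args = append(args, op, chain, "-t", table)
	return cmd, args
}

func main() {
	cmd, args := buildArgs(false, "-N", "KUBE-PROXY-CANARY", "mangle")
	fmt.Println(cmd, args) // iptables [-w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle]
}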

Example command for creating a chain

  • ipv4
  • iptables version 1.6.1 or later
  • creating a chain in the mangle table

Given those assumptions, the command looks like this:

iptables -w 5 -W 100000 -N "KUBE-PROXY-CANARY" -t mangle

The code is here

Example command for checking whether a chain exists

  • ipv4
  • iptables version 1.6.1 or later
  • checking a chain in the mangle table

Given those assumptions, the command looks like this:

iptables -w 5 -W 100000 -S "KUBE-PROXY-CANARY" -t mangle

The code is here

Whether the chain exists is determined by checking whether this command returns an error:

func (runner *runner) ChainExists(table Table, chain Chain) (bool, error) {
	fullArgs := makeFullArgs(table, chain)

	runner.mu.Lock()
	defer runner.mu.Unlock()

	trace := utiltrace.New("iptables ChainExists")
	defer trace.LogIfLong(2 * time.Second)

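	// opListChain runs `iptables -S <chain> -t <table>`; the chain exists iff the command succeeds.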
	_, err := runner.run(opListChain, fullArgs)
	return err == nil, err
}

An experiment to check the behavior above

root@kind-control-plane:/# iptables -w 5 -W 100000 -S "KUBE-PROXY-CANARY" -t mangle
-N KUBE-PROXY-CANARY
root@kind-control-plane:/# echo $?
0
root@kind-control-plane:/# iptables -w 5 -W 100000 -S "KUBE-PROXY-CANARY-test" -t mangle
iptables v1.8.7 (nf_tables): chain `KUBE-PROXY-CANARY-test' in table `mangle' is incompatible, use 'nft' tool.

root@kind-control-plane:/# echo $?
1

So, as shown above, specifying a chain that doesn't exist results in an error.

bells17

serviceConfig.Run

OnServiceSynced

OnServiceAdd

OnServiceDelete

OnServiceUpdate

bells17

endpointSliceConfig.Run

The EndpointSlice version of serviceConfig.Run

OnEndpointSliceAdd

OnEndpointSliceDelete

OnEndpointSliceUpdate

OnEndpointSlicesSynced

bells17

nodeConfig.Run

The Node version of serviceConfig.Run

OnNodeAdd

OnNodeUpdate

OnNodeDelete

OnNodeSynced

Does nothing

bells17

Experiments with kind

Checking the kube-proxy YAML

$ kubectl -n kube-system get ds kube-proxy -o yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "1"
  creationTimestamp: "2023-10-18T07:10:37Z"
  generation: 1
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "395"
  uid: 36ff9437-e37c-4868-861a-a39fdbf232bc
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kube-proxy
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-proxy
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        - --hostname-override=$(NODE_NAME)
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: registry.k8s.io/kube-proxy:v1.27.1
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/kube-proxy
          name: kube-proxy
        - mountPath: /run/xtables.lock
          name: xtables-lock
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kube-proxy
      serviceAccountName: kube-proxy
      terminationGracePeriodSeconds: 30
      tolerations:
      - operator: Exists
      volumes:
      - configMap:
          defaultMode: 420
          name: kube-proxy
        name: kube-proxy
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
      - hostPath:
          path: /lib/modules
          type: ""
        name: lib-modules
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 1
  desiredNumberScheduled: 1
  numberAvailable: 1
  numberMisscheduled: 0
  numberReady: 1
  observedGeneration: 1
  updatedNumberScheduled: 1

I haven't checked this at the behavior level, but the settings below seem to be what let it manipulate the host's iptables
(same as with CSI drivers and the like)

privileged: true
hostNetwork: true

Checking the ServiceAccount

kube-proxy appears to have only the following rule assigned
(there may also be Roles or ClusterRoles bound under a different name)

rules:
- apiGroups:
  - ""
  resourceNames:
  - kube-proxy
  resources:
  - configmaps
  verbs:
  - get

That said, I'm not sure what this part is:

resourceNames:
  - kube-proxy
$ kubectl -n kube-system get sa kube-proxy -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2023-10-18T07:10:38Z"
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "226"
  uid: 42a081ee-48bf-4b9a-9323-fa34303ee837
$ kubectl -n kube-system get role kube-proxy -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2023-10-18T07:10:38Z"
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "231"
  uid: 1b55bcfe-10a7-44d0-8ebf-f8cad170d19b
rules:
- apiGroups:
  - ""
  resourceNames:
  - kube-proxy
  resources:
  - configmaps
  verbs:
  - get
$ kubectl -n kube-system get clusterrole kube-proxy -o yaml
Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "kube-proxy" not found
bells17

List of chains set up by kube-proxy

| Table | Chain | Jumped from | Description |
| --- | --- | --- | --- |
| filter | "KUBE-EXTERNAL-SERVICES" | INPUT, FORWARD | |
| filter | "KUBE-NODEPORTS" | INPUT | |
| filter | "KUBE-SERVICES" | FORWARD, OUTPUT | |
| filter | "KUBE-FORWARD" | FORWARD | |
| filter | "KUBE-PROXY-FIREWALL" | INPUT, FORWARD, OUTPUT | |
| filter | "KUBE-FIREWALL" | INPUT, OUTPUT | |
| filter | "KUBE-PROXY-CANARY" | ? | |
| nat | "KUBE-SERVICES" | OUTPUT, PREROUTING | |
| nat | "KUBE-POSTROUTING" | POSTROUTING | |
| nat | "KUBE-MARK-MASQ" | ? | |
| nat | "KUBE-PROXY-CANARY" | ? | |
| mangle | "KUBE-PROXY-CANARY" | ? | |
| nat | "KUBE-SEP-<16-char hash (based on servicePortName + protocol + endpoint)>" | ? | |
| nat | "KUBE-EXT-<16-char hash of servicePortName + protocol>" | ? | |
| nat | "KUBE-SVC-<16-char hash of servicePortName + protocol>" | ? | |
| nat | "KUBE-SVL-<16-char hash of servicePortName + protocol>" | ? | |
| filter | "KUBE-FW-<16-char hash of servicePortName + protocol>" | ? | |
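
As a side note on the 16-character hash part of these names: based on my reading of portProtoHash / servicePortEndpointChainName in pkg/proxy/iptables/proxier.go it looks like a base32-encoded SHA-256 prefix, roughly like the sketch below (treat the details, e.g. the exact input strings, as assumptions).

// Sketch of how the 16-character suffixes in KUBE-SVC-/KUBE-SEP-/KUBE-EXT-... chain
// names appear to be derived; the exact input strings here are assumptions.
package main

import (
	"crypto/sha256"
	"encoding/base32"
	"fmt"
)

func portProtoHash(servicePortName, protocol string) string {
	hash := sha256.Sum256([]byte(servicePortName + protocol))
	return base32.StdEncoding.EncodeToString(hash[:])[:16]
}

func main() {
	// Something like the KUBE-SVC chain name for default/np-service
	// (assuming an empty port name and a lowercased protocol string).
	fmt.Println("KUBE-SVC-" + portProtoHash("default/np-service", "tcp"))

	// KUBE-SEP chains additionally mix the endpoint address into the hash.
	sep := sha256.Sum256([]byte("default/np-service" + "tcp" + "10.244.1.3:8080"))
	fmt.Println("KUBE-SEP-" + base32.StdEncoding.EncodeToString(sep[:])[:16])
}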
bells17

The comment before this one:
https://zenn.dev/bells17/scraps/5e41da598a8266#comment-a070557f8a1596

The contents of the for loop over proxier.svcPortMap in syncProxyRules

bells17

Experiments with kind

kind

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

Start it with: kind create cluster --name kube-proxy-example --config kind-config.yaml

Sample Services

apiVersion: v1
kind: Service
metadata:
  name: cip-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
  type: ClusterIP

---
apiVersion: v1
kind: Service
metadata:
  name: np-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
  type: NodePort

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nginx
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

Checking after apply

$ kubectl get svc np-service -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"np-service","namespace":"default"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":8080}],"selector":{"app":"nginx"},"type":"NodePort"}}
  creationTimestamp: "2023-11-07T16:20:31Z"
  name: np-service
  namespace: default
  resourceVersion: "766"
  uid: 98ca6ada-3bb5-4b91-b46d-a84df41d0d27
spec:
  clusterIP: 10.96.191.124
  clusterIPs:
  - 10.96.191.124
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 31786
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: nginx
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}


$ kubectl get endpointslice np-service-72gzs -o yaml
addressType: IPv4
apiVersion: discovery.k8s.io/v1
endpoints:
- addresses:
  - 10.244.2.3
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: kube-proxy-example-worker2
  targetRef:
    kind: Pod
    name: nginx-6c6656d8f6-6zv7t
    namespace: default
    uid: 4ad3a7cc-3144-42fd-93e4-160445ebcc3b
- addresses:
  - 10.244.1.3
  conditions:
    ready: true
    serving: true
    terminating: false
  nodeName: kube-proxy-example-worker
  targetRef:
    kind: Pod
    name: nginx-6c6656d8f6-j8nkm
    namespace: default
    uid: 00329021-b489-4a94-9511-1932069de5ec
kind: EndpointSlice
metadata:
  annotations:
    endpoints.kubernetes.io/last-change-trigger-time: "2023-11-07T16:51:06Z"
  creationTimestamp: "2023-11-07T16:20:31Z"
  generateName: np-service-
  generation: 9
  labels:
    endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io
    kubernetes.io/service-name: np-service
  name: np-service-72gzs
  namespace: default
  ownerReferences:
  - apiVersion: v1
    blockOwnerDeletion: true
    controller: true
    kind: Service
    name: np-service
    uid: 98ca6ada-3bb5-4b91-b46d-a84df41d0d27
  resourceVersion: "3705"
  uid: e8b8e3e1-5bb2-4e25-b24a-965ace8eb27b
ports:
- name: ""
  port: 8080
  protocol: TCP

Data for reference

$ kubectl get svc np-service
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
np-service   NodePort   10.96.191.124   <none>        80:31786/TCP   36m

$ kubectl get endpointslice np-service-72gzs
NAME               ADDRESSTYPE   PORTS   ENDPOINTS               AGE
np-service-72gzs   IPv4          8080    10.244.2.3,10.244.1.3   36m

$ kubectl get pod -l app=nginx -o yaml | egrep "ip|nodeName"
    nodeName: kube-proxy-example-worker2
    - ip: 10.244.2.3
    nodeName: kube-proxy-example-worker
    - ip: 10.244.1.3

$ kubectl get node -o wide
NAME                               STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION                       CONTAINER-RUNTIME
kube-proxy-example-control-plane   Ready    control-plane   39m   v1.27.1   192.168.228.3   <none>        Debian GNU/Linux 11 (bullseye)   6.5.7-orbstack-00109-gd8500ae6683d   containerd://1.6.21
kube-proxy-example-worker          Ready    <none>          39m   v1.27.1   192.168.228.5   <none>        Debian GNU/Linux 11 (bullseye)   6.5.7-orbstack-00109-gd8500ae6683d   containerd://1.6.21
kube-proxy-example-worker2         Ready    <none>          39m   v1.27.1   192.168.228.4   <none>        Debian GNU/Linux 11 (bullseye)   6.5.7-orbstack-00109-gd8500ae6683d   containerd://1.6.21

$ docker exec -it 8892dd140b3e /bin/bash
root@kind-control-plane:/# curl -I 192.168.228.4:31786 # Node IP and NodePort
HTTP/1.1 200 OK
Server: nginx/1.25.3
Date: Wed, 08 Nov 2023 09:32:46 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 24 Oct 2023 13:46:47 GMT
Connection: keep-alive
ETag: "6537cac7-267"
Accept-Ranges: bytes
# reachable
root@kind-control-plane:/# curl -I 10.96.191.124:31786 # np-service's ClusterIP and NodePort
^C # not reachable
root@kind-control-plane:/# curl -I 10.96.191.124  # np-service's ClusterIP
^C # not reachable
root@kind-control-plane:/# curl -I 10.96.220.188 # cip-service's ClusterIP
^C


$ ip route show
default via 192.168.228.1 dev eth0 
10.244.0.0/24 via 192.168.228.3 dev eth0 
10.244.1.0/24 via 192.168.228.5 dev eth0 
10.244.2.3 dev veth1ba6bf71 scope host 
192.168.228.0/24 dev eth0 proto kernel scope link src 192.168.228.4

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: veth1ba6bf71@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 5a:5f:82:b2:29:9a brd ff:ff:ff:ff:ff:ff link-netns cni-069ee48c-2692-7d63-a235-d1d4b4cfc156
    inet 10.244.2.1/32 scope global veth1ba6bf71
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:c0:a8:e4:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.228.4/24 brd 192.168.228.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fc00:f853:ccd:e793::4/64 scope global nodad 
       valid_lft forever preferred_lft forever
    inet6 fe80::42:c0ff:fea8:e404/64 scope link 
       valid_lft forever preferred_lft forever

$ kubectl -n kube-system get ds kube-proxy -o yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "2"
  creationTimestamp: "2023-11-07T16:17:57Z"
  generation: 2
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "1514"
  uid: 2e532402-d4d8-4564-9abb-a3299e26927b
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kube-proxy
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-proxy
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-proxy
        - --config=/var/lib/kube-proxy/config.conf
        - --hostname-override=$(NODE_NAME)
        - -v
        - "4"
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: registry.k8s.io/kube-proxy:v1.27.1
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/kube-proxy
          name: kube-proxy
        - mountPath: /run/xtables.lock
          name: xtables-lock
        - mountPath: /lib/modules
          name: lib-modules
          readOnly: true
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kube-proxy
      serviceAccountName: kube-proxy
      terminationGracePeriodSeconds: 30
      tolerations:
      - operator: Exists
      volumes:
      - configMap:
          defaultMode: 420
          name: kube-proxy
        name: kube-proxy
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
      - hostPath:
          path: /lib/modules
          type: ""
        name: lib-modules
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberAvailable: 3
  numberMisscheduled: 0
  numberReady: 3
  observedGeneration: 2
  updatedNumberScheduled: 3

$ kubectl -n kube-system get cm kube-proxy -o yaml
apiVersion: v1
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.244.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: 0
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocal:
      bridgeInterface: ""
      interfaceNamePrefix: ""
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      localhostNodePorts: null
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 1s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: iptables
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    winkernel:
      enableDSR: false
      forwardHealthCheckVip: false
      networkName: ""
      rootHnsEndpointName: ""
      sourceVip: ""
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://kube-proxy-example-control-plane:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  creationTimestamp: "2023-11-07T16:17:57Z"
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
  resourceVersion: "260"
  uid: 7b8292b7-8a37-4003-94a1-d371471bdc75
bells17

Continued from
https://zenn.dev/bells17/scraps/5e41da598a8266#comment-57a7a1c6d42732

iptables (filter) on kube-proxy-example-worker2

$ iptables -t filter -L -n -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   36  2870 KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes load balancer firewall */
 6219   73M KUBE-NODEPORTS  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes health check service ports */
   36  2870 KUBE-EXTERNAL-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes externally-visible service portals */
 6439   73M KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes load balancer firewall */
    0     0 KUBE-FORWARD  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */
    0     0 KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes service portals */
    0     0 KUBE-EXTERNAL-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes externally-visible service portals */

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   76  5748 KUBE-PROXY-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes load balancer firewall */
   76  5748 KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate NEW /* kubernetes service portals */
 5916  676K KUBE-FIREWALL  all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain KUBE-EXTERNAL-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  *      *      !127.0.0.0/8          127.0.0.0/8          /* block incoming localnet connections */ ! ctstate RELATED,ESTABLISHED,DNAT

Chain KUBE-FORWARD (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate INVALID
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding rules */ mark match 0x4000/0x4000
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes forwarding conntrack rule */ ctstate RELATED,ESTABLISHED

Chain KUBE-KUBELET-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-NODEPORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-PROXY-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-PROXY-FIREWALL (3 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination




Chain KUBE-SEP-RP3NPELGJOKVPZER (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.1.3           0.0.0.0/0            /* default/np-service */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/np-service */ tcp to:10.244.1.3:8080

Chain KUBE-SEP-SF3LG62VAE5ALYDV (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.4           0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */ tcp to:10.244.0.4:53

Chain KUBE-SEP-T4U2PF73XRV27O6N (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.2.3           0.0.0.0/0            /* default/np-service */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/np-service */ tcp to:10.244.2.3:8080

Chain KUBE-SEP-WXWGHGKZOCNYRYI7 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.4           0.0.0.0/0            /* kube-system/kube-dns:dns */
    0     0 DNAT       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */ udp to:10.244.0.4:53

Chain KUBE-SEP-YIL6JZP7A3QYXJU2 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.2           0.0.0.0/0            /* kube-system/kube-dns:dns */
    0     0 DNAT       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */ udp to:10.244.0.2:53

Chain KUBE-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  *      *       0.0.0.0/0            10.96.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:443
    0     0 KUBE-SVC-OI3ES3UZPSOHIVZW  tcp  --  *      *       0.0.0.0/0            10.96.191.124        /* default/np-service cluster IP */ tcp dpt:80
    0     0 KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  *      *       0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
    0     0 KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  *      *       0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
    0     0 KUBE-SVC-JD5MR3NA4I4DYORP  tcp  --  *      *       0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
    1    60 KUBE-NODEPORTS  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
    0     0 KUBE-SEP-IT2ZTR26TO4XFPTO  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp -> 10.244.0.2:53 */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-SF3LG62VAE5ALYDV  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp -> 10.244.0.4:53 */

Chain KUBE-SVC-JD5MR3NA4I4DYORP (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
    0     0 KUBE-SEP-N4G2XR5TDX7PQE7P  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:metrics -> 10.244.0.2:9153 */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-PUHFDAMRBZWCPADU  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:metrics -> 10.244.0.4:9153 */

Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.96.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:443
    0     0 KUBE-SEP-7NBDIM4CRVL5CDQU  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https -> 192.168.228.3:6443 */

Chain KUBE-SVC-OI3ES3UZPSOHIVZW (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.96.191.124        /* default/np-service cluster IP */ tcp dpt:80
    0     0 KUBE-SEP-RP3NPELGJOKVPZER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/np-service -> 10.244.1.3:8080 */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-T4U2PF73XRV27O6N  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/np-service -> 10.244.2.3:8080 */

Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  udp  --  *      *      !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
    0     0 KUBE-SEP-YIL6JZP7A3QYXJU2  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns -> 10.244.0.2:53 */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-WXWGHGKZOCNYRYI7  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns -> 10.244.0.4:53 */
bells17

Continued from
https://zenn.dev/bells17/scraps/5e41da598a8266#comment-57a7a1c6d42732

Logs from the kube-proxy running on kube-proxy-example-worker2 (with -v 4 added to increase output)

I updated the Deployment and re-applied it, but the log got too long to update here, so this is the older log

$ kubectl -n kube-system logs kube-proxy-tvs52 # kube-proxy run with -v 4 to increase log output
I1107 16:27:39.299310       1 flags.go:64] FLAG: --bind-address="0.0.0.0"
I1107 16:27:39.299362       1 flags.go:64] FLAG: --bind-address-hard-fail="false"
I1107 16:27:39.299365       1 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
I1107 16:27:39.299368       1 flags.go:64] FLAG: --cleanup="false"
I1107 16:27:39.299370       1 flags.go:64] FLAG: --cluster-cidr=""
I1107 16:27:39.299372       1 flags.go:64] FLAG: --config="/var/lib/kube-proxy/config.conf"
I1107 16:27:39.299374       1 flags.go:64] FLAG: --config-sync-period="15m0s"
I1107 16:27:39.299379       1 flags.go:64] FLAG: --conntrack-max-per-core="32768"
I1107 16:27:39.299381       1 flags.go:64] FLAG: --conntrack-min="131072"
I1107 16:27:39.299382       1 flags.go:64] FLAG: --conntrack-tcp-timeout-close-wait="1h0m0s"
I1107 16:27:39.299384       1 flags.go:64] FLAG: --conntrack-tcp-timeout-established="24h0m0s"
I1107 16:27:39.299388       1 flags.go:64] FLAG: --detect-local-mode=""
I1107 16:27:39.299391       1 flags.go:64] FLAG: --feature-gates=""
I1107 16:27:39.299393       1 flags.go:64] FLAG: --healthz-bind-address="0.0.0.0:10256"
I1107 16:27:39.299396       1 flags.go:64] FLAG: --healthz-port="10256"
I1107 16:27:39.299398       1 flags.go:64] FLAG: --help="false"
I1107 16:27:39.299399       1 flags.go:64] FLAG: --hostname-override="kube-proxy-example-worker2"
I1107 16:27:39.299401       1 flags.go:64] FLAG: --iptables-localhost-nodeports="true"
I1107 16:27:39.299402       1 flags.go:64] FLAG: --iptables-masquerade-bit="14"
I1107 16:27:39.299403       1 flags.go:64] FLAG: --iptables-min-sync-period="1s"
I1107 16:27:39.299405       1 flags.go:64] FLAG: --iptables-sync-period="30s"
I1107 16:27:39.299406       1 flags.go:64] FLAG: --ipvs-exclude-cidrs="[]"
I1107 16:27:39.299421       1 flags.go:64] FLAG: --ipvs-min-sync-period="0s"
I1107 16:27:39.299427       1 flags.go:64] FLAG: --ipvs-scheduler=""
I1107 16:27:39.299432       1 flags.go:64] FLAG: --ipvs-strict-arp="false"
I1107 16:27:39.299436       1 flags.go:64] FLAG: --ipvs-sync-period="30s"
I1107 16:27:39.299441       1 flags.go:64] FLAG: --ipvs-tcp-timeout="0s"
I1107 16:27:39.299445       1 flags.go:64] FLAG: --ipvs-tcpfin-timeout="0s"
I1107 16:27:39.299450       1 flags.go:64] FLAG: --ipvs-udp-timeout="0s"
I1107 16:27:39.299456       1 flags.go:64] FLAG: --kube-api-burst="10"
I1107 16:27:39.299461       1 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I1107 16:27:39.299465       1 flags.go:64] FLAG: --kube-api-qps="5"
I1107 16:27:39.299471       1 flags.go:64] FLAG: --kubeconfig=""
I1107 16:27:39.299482       1 flags.go:64] FLAG: --log-flush-frequency="5s"
I1107 16:27:39.299488       1 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
I1107 16:27:39.299493       1 flags.go:64] FLAG: --masquerade-all="false"
I1107 16:27:39.299497       1 flags.go:64] FLAG: --master=""
I1107 16:27:39.299503       1 flags.go:64] FLAG: --metrics-bind-address="127.0.0.1:10249"
I1107 16:27:39.299507       1 flags.go:64] FLAG: --metrics-port="10249"
I1107 16:27:39.299511       1 flags.go:64] FLAG: --nodeport-addresses="[]"
I1107 16:27:39.299518       1 flags.go:64] FLAG: --oom-score-adj="-999"
I1107 16:27:39.299522       1 flags.go:64] FLAG: --pod-bridge-interface=""
I1107 16:27:39.299531       1 flags.go:64] FLAG: --pod-interface-name-prefix=""
I1107 16:27:39.299536       1 flags.go:64] FLAG: --profiling="false"
I1107 16:27:39.299541       1 flags.go:64] FLAG: --proxy-mode=""
I1107 16:27:39.299546       1 flags.go:64] FLAG: --proxy-port-range=""
I1107 16:27:39.299551       1 flags.go:64] FLAG: --show-hidden-metrics-for-version=""
I1107 16:27:39.299555       1 flags.go:64] FLAG: --v="4"
I1107 16:27:39.299559       1 flags.go:64] FLAG: --version="false"
I1107 16:27:39.299574       1 flags.go:64] FLAG: --vmodule=""
I1107 16:27:39.299579       1 flags.go:64] FLAG: --write-config-to=""
I1107 16:27:39.300211       1 feature_gate.go:249] feature gates: &{map[]}
I1107 16:27:39.300338       1 feature_gate.go:249] feature gates: &{map[]}
I1107 16:27:39.307383       1 node.go:141] Successfully retrieved node IP: 192.168.228.4
I1107 16:27:39.307410       1 server_others.go:110] "Detected node IP" address="192.168.228.4"
I1107 16:27:39.307426       1 server_others.go:425] "Defaulting detect-local-mode" localModeClusterCIDR="ClusterCIDR"
I1107 16:27:39.307434       1 server_others.go:147] "DetectLocalMode" localMode="ClusterCIDR"
I1107 16:27:39.315685       1 server_others.go:190] "Using iptables Proxier"
I1107 16:27:39.315722       1 server_others.go:197] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I1107 16:27:39.315731       1 server_others.go:198] "Creating dualStackProxier for iptables"
I1107 16:27:39.315742       1 server_others.go:481] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
I1107 16:27:39.315785       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I1107 16:27:39.315868       1 proxier.go:269] "Using iptables mark for masquerade" ipFamily=IPv4 mark="0x00004000"
I1107 16:27:39.315961       1 proxier.go:303] "Iptables sync params" ipFamily=IPv4 minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I1107 16:27:39.315993       1 proxier.go:313] "Iptables supports --random-fully" ipFamily=IPv4
I1107 16:27:39.316020       1 proxier.go:269] "Using iptables mark for masquerade" ipFamily=IPv6 mark="0x00004000"
I1107 16:27:39.316076       1 proxier.go:303] "Iptables sync params" ipFamily=IPv6 minSyncPeriod="1s" syncPeriod="30s" burstSyncs=2
I1107 16:27:39.316094       1 proxier.go:313] "Iptables supports --random-fully" ipFamily=IPv6
I1107 16:27:39.316182       1 server.go:657] "Version info" version="v1.27.1"
I1107 16:27:39.316197       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1107 16:27:39.316204       1 oom_linux.go:65] attempting to set "/proc/self/oom_score_adj" to "-999"
I1107 16:27:39.316534       1 bounded_frequency_runner.go:192] sync-runner Loop running
I1107 16:27:39.316553       1 bounded_frequency_runner.go:192] sync-runner Loop running
I1107 16:27:39.316943       1 reflector.go:287] Starting reflector *v1.EndpointSlice (15m0s) from vendor/k8s.io/client-go/informers/factory.go:150
I1107 16:27:39.316954       1 reflector.go:323] Listing and watching *v1.EndpointSlice from vendor/k8s.io/client-go/informers/factory.go:150
I1107 16:27:39.317684       1 reflector.go:287] Starting reflector *v1.Service (15m0s) from vendor/k8s.io/client-go/informers/factory.go:150
I1107 16:27:39.317689       1 reflector.go:323] Listing and watching *v1.Service from vendor/k8s.io/client-go/informers/factory.go:150
I1107 16:27:39.317791       1 proxier_health.go:146] "Starting healthz HTTP server" address="0.0.0.0:10256"
I1107 16:27:39.317810       1 config.go:188] "Starting service config controller"
I1107 16:27:39.317817       1 config.go:97] "Starting endpoint slice config controller"
I1107 16:27:39.317822       1 shared_informer.go:311] Waiting for caches to sync for service config
I1107 16:27:39.317822       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I1107 16:27:39.317862       1 reflector.go:287] Starting reflector *v1.Node (15m0s) from vendor/k8s.io/client-go/informers/factory.go:150
I1107 16:27:39.317866       1 reflector.go:323] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:150
I1107 16:27:39.317979       1 config.go:315] "Starting node config controller"
I1107 16:27:39.317982       1 shared_informer.go:311] Waiting for caches to sync for node config
I1107 16:27:39.319170       1 config.go:116] "Calling handler.OnEndpointSliceAdd" endpoints="default/kubernetes"
I1107 16:27:39.319574       1 config.go:334] "Calling handler.OnNodeAdd"
I1107 16:27:39.319608       1 config.go:116] "Calling handler.OnEndpointSliceAdd" endpoints="default/np-service-72gzs"
I1107 16:27:39.319622       1 config.go:207] "Calling handler.OnServiceAdd"
I1107 16:27:39.319636       1 service.go:324] "Service updated ports" service="default/kubernetes" portCount=1
I1107 16:27:39.319644       1 config.go:207] "Calling handler.OnServiceAdd"
I1107 16:27:39.319651       1 proxier.go:621] "Updated proxier node labels" labels=map[beta.kubernetes.io/arch:arm64 beta.kubernetes.io/os:linux kubernetes.io/arch:arm64 kubernetes.io/hostname:kube-proxy-example-worker2 kubernetes.io/os:linux]
I1107 16:27:39.319659       1 config.go:116] "Calling handler.OnEndpointSliceAdd" endpoints="kube-system/kube-dns-sg226"
I1107 16:27:39.319663       1 proxier.go:621] "Updated proxier node labels" labels=map[beta.kubernetes.io/arch:arm64 beta.kubernetes.io/os:linux kubernetes.io/arch:arm64 kubernetes.io/hostname:kube-proxy-example-worker2 kubernetes.io/os:linux]
I1107 16:27:39.319652       1 service.go:324] "Service updated ports" service="default/np-service" portCount=1
I1107 16:27:39.319706       1 config.go:207] "Calling handler.OnServiceAdd"
I1107 16:27:39.319713       1 service.go:324] "Service updated ports" service="kube-system/kube-dns" portCount=3
I1107 16:27:39.319789       1 proxier.go:814] "Not syncing iptables until Services and Endpoints have been received from master"
I1107 16:27:39.319799       1 bounded_frequency_runner.go:296] sync-runner: ran, next possible in 1s, periodic in 1h0m0s
I1107 16:27:39.319790       1 proxier.go:814] "Not syncing iptables until Services and Endpoints have been received from master"
I1107 16:27:39.319803       1 bounded_frequency_runner.go:296] sync-runner: ran, next possible in 1s, periodic in 1h0m0s
I1107 16:27:39.419972       1 shared_informer.go:341] caches populated
I1107 16:27:39.419978       1 shared_informer.go:318] Caches are synced for service config
I1107 16:27:39.419981       1 config.go:195] "Calling handler.OnServiceSynced()"
I1107 16:27:39.419992       1 proxier.go:814] "Not syncing iptables until Services and Endpoints have been received from master"
I1107 16:27:39.420004       1 proxier.go:814] "Not syncing iptables until Services and Endpoints have been received from master"
I1107 16:27:39.420025       1 shared_informer.go:341] caches populated
I1107 16:27:39.420035       1 shared_informer.go:318] Caches are synced for node config
I1107 16:27:39.420040       1 config.go:322] "Calling handler.OnNodeSynced()"
I1107 16:27:39.420048       1 shared_informer.go:341] caches populated
I1107 16:27:39.420057       1 shared_informer.go:318] Caches are synced for endpoint slice config
I1107 16:27:39.420060       1 config.go:104] "Calling handler.OnEndpointSlicesSynced()"
I1107 16:27:39.420102       1 service.go:457] "Adding new service port" portName="default/kubernetes:https" servicePort="10.96.0.1:443/TCP"
I1107 16:27:39.420114       1 service.go:457] "Adding new service port" portName="default/np-service" servicePort="10.96.191.124:80/TCP"
I1107 16:27:39.420119       1 service.go:457] "Adding new service port" portName="kube-system/kube-dns:dns" servicePort="10.96.0.10:53/UDP"
I1107 16:27:39.420134       1 service.go:457] "Adding new service port" portName="kube-system/kube-dns:dns-tcp" servicePort="10.96.0.10:53/TCP"
I1107 16:27:39.420139       1 service.go:457] "Adding new service port" portName="kube-system/kube-dns:metrics" servicePort="10.96.0.10:9153/TCP"
I1107 16:27:39.420156       1 endpointslicecache.go:373] "Setting endpoints for service port name" portName="default/kubernetes:https" endpoints=[192.168.228.3:6443]
I1107 16:27:39.420163       1 endpointslicecache.go:373] "Setting endpoints for service port name" portName="default/np-service" endpoints=[10.244.1.2:8080 10.244.2.2:8080]
I1107 16:27:39.420175       1 endpointslicecache.go:373] "Setting endpoints for service port name" portName="kube-system/kube-dns:dns" endpoints=[10.244.0.2:53 10.244.0.4:53]
I1107 16:27:39.420178       1 endpointslicecache.go:373] "Setting endpoints for service port name" portName="kube-system/kube-dns:dns-tcp" endpoints=[10.244.0.2:53 10.244.0.4:53]
I1107 16:27:39.420185       1 endpointslicecache.go:373] "Setting endpoints for service port name" portName="kube-system/kube-dns:metrics" endpoints=[10.244.0.2:9153 10.244.0.4:9153]
I1107 16:27:39.420191       1 proxier.go:842] "Newly-active UDP service may have stale conntrack entries" servicePortName="kube-system/kube-dns:dns"
I1107 16:27:39.420194       1 proxier.go:857] "Syncing iptables rules"
I1107 16:27:39.435921       1 iptables.go:358] "Running" command="iptables-save" arguments=[-t nat]
I1107 16:27:39.438269       1 proxier.go:1573] "Reloading service iptables data" numServices=5 numEndpoints=9 numFilterChains=6 numFilterRules=4 numNATChains=19 numNATRules=45
I1107 16:27:39.438300       1 iptables.go:423] "Running" command="iptables-restore" arguments=[-w 5 -W 100000 --noflush --counters]
I1107 16:27:39.440883       1 proxier.go:1626] "Deleting conntrack stale entries for services" IPs=[10.96.0.10]
I1107 16:27:39.440925       1 conntrack.go:66] Clearing conntrack entries [-D --orig-dst 10.96.0.10 -p udp]
I1107 16:27:39.442632       1 proxier.go:1632] "Deleting conntrack stale entries for services" nodePorts=[]
I1107 16:27:39.442660       1 proxier.go:1639] "Deleting stale endpoint connections" endpoints=[]
I1107 16:27:39.442677       1 proxier.go:822] "SyncProxyRules complete" elapsed="22.604208ms"
I1107 16:27:39.442687       1 proxier.go:857] "Syncing iptables rules"
I1107 16:27:39.459385       1 iptables.go:358] "Running" command="ip6tables-save" arguments=[-t nat]
I1107 16:27:39.460609       1 proxier.go:1573] "Reloading service iptables data" numServices=0 numEndpoints=0 numFilterChains=5 numFilterRules=3 numNATChains=4 numNATRules=5
I1107 16:27:39.460632       1 iptables.go:423] "Running" command="ip6tables-restore" arguments=[-w 5 -W 100000 --noflush --counters]
I1107 16:27:39.461490       1 proxier.go:1626] "Deleting conntrack stale entries for services" IPs=[]
I1107 16:27:39.461506       1 proxier.go:1632] "Deleting conntrack stale entries for services" nodePorts=[]
I1107 16:27:39.461515       1 proxier.go:1639] "Deleting stale endpoint connections" endpoints=[]
I1107 16:27:39.461548       1 proxier.go:822] "SyncProxyRules complete" elapsed="18.83516ms"
bells17

Continued from
https://zenn.dev/bells17/scraps/5e41da598a8266#comment-57a7a1c6d42732

iptables (nat + mangle) on kube-proxy-example-worker2

$ iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 2 packets, 120 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    2   120 KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
    0     0 DOCKER_OUTPUT  all  --  *      *       0.0.0.0/0            198.19.248.254      

Chain INPUT (policy ACCEPT 2 packets, 120 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 59 packets, 4018 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   76  5748 KUBE-SERVICES  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service portals */
   54  4530 DOCKER_OUTPUT  all  --  *      *       0.0.0.0/0            198.19.248.254      

Chain POSTROUTING (policy ACCEPT 73 packets, 5670 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   76  5748 KUBE-POSTROUTING  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes postrouting rules */
    0     0 DOCKER_POSTROUTING  all  --  *      *       0.0.0.0/0            198.19.248.254      
   40  2878 KIND-MASQ-AGENT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type !LOCAL /* kind-masq-agent: ensure nat POSTROUTING directs all non-LOCAL destination traffic to our custom KIND-MASQ-AGENT chain */

Chain DOCKER_OUTPUT (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            198.19.248.254       tcp dpt:53 to:127.0.0.11:41377
   54  4530 DNAT       udp  --  *      *       0.0.0.0/0            198.19.248.254       udp dpt:53 to:127.0.0.11:44925

Chain DOCKER_POSTROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 SNAT       tcp  --  *      *       127.0.0.11           0.0.0.0/0            to:198.19.248.254:53
    0     0 SNAT       udp  --  *      *       127.0.0.11           0.0.0.0/0            to:198.19.248.254:53

Chain KIND-MASQ-AGENT (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  *      *       0.0.0.0/0            10.244.0.0/16        /* kind-masq-agent: local traffic is not subject to MASQUERADE */
   40  2878 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kind-masq-agent: outbound traffic is subject to MASQUERADE (must be last in chain) */

Chain KUBE-EXT-OI3ES3UZPSOHIVZW (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* masquerade traffic for default/np-service external destinations */
    0     0 KUBE-SVC-OI3ES3UZPSOHIVZW  all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain KUBE-KUBELET-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-MARK-MASQ (15 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x4000

Chain KUBE-NODEPORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-EXT-OI3ES3UZPSOHIVZW  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/np-service */ tcp dpt:31786

Chain KUBE-POSTROUTING (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            mark match ! 0x4000/0x4000
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK xor 0x4000
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service traffic requiring SNAT */ random-fully

Chain KUBE-PROXY-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-SEP-7NBDIM4CRVL5CDQU (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       192.168.228.3        0.0.0.0/0            /* default/kubernetes:https */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https */ tcp to:192.168.228.3:6443

Chain KUBE-SEP-IT2ZTR26TO4XFPTO (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.2           0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */ tcp to:10.244.0.2:53

Chain KUBE-SEP-N4G2XR5TDX7PQE7P (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.2           0.0.0.0/0            /* kube-system/kube-dns:metrics */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:metrics */ tcp to:10.244.0.2:9153

Chain KUBE-SEP-PUHFDAMRBZWCPADU (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.4           0.0.0.0/0            /* kube-system/kube-dns:metrics */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:metrics */ tcp to:10.244.0.4:9153

Chain KUBE-SEP-RP3NPELGJOKVPZER (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.1.3           0.0.0.0/0            /* default/np-service */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/np-service */ tcp to:10.244.1.3:8080

Chain KUBE-SEP-SF3LG62VAE5ALYDV (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.4           0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp */ tcp to:10.244.0.4:53

Chain KUBE-SEP-T4U2PF73XRV27O6N (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.2.3           0.0.0.0/0            /* default/np-service */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/np-service */ tcp to:10.244.2.3:8080

Chain KUBE-SEP-WXWGHGKZOCNYRYI7 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.4           0.0.0.0/0            /* kube-system/kube-dns:dns */
    0     0 DNAT       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */ udp to:10.244.0.4:53

Chain KUBE-SEP-YIL6JZP7A3QYXJU2 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  all  --  *      *       10.244.0.2           0.0.0.0/0            /* kube-system/kube-dns:dns */
    0     0 DNAT       udp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns */ udp to:10.244.0.2:53

Chain KUBE-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  *      *       0.0.0.0/0            10.96.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:443
    0     0 KUBE-SVC-OI3ES3UZPSOHIVZW  tcp  --  *      *       0.0.0.0/0            10.96.191.124        /* default/np-service cluster IP */ tcp dpt:80
    0     0 KUBE-SVC-TCOU7JCQXEZGVUNU  udp  --  *      *       0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
    0     0 KUBE-SVC-ERIFXISQEP7F7OF4  tcp  --  *      *       0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
    0     0 KUBE-SVC-JD5MR3NA4I4DYORP  tcp  --  *      *       0.0.0.0/0            10.96.0.10           /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
    1    60 KUBE-NODEPORTS  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL

Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:53
    0     0 KUBE-SEP-IT2ZTR26TO4XFPTO  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp -> 10.244.0.2:53 */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-SF3LG62VAE5ALYDV  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns-tcp -> 10.244.0.4:53 */

Chain KUBE-SVC-JD5MR3NA4I4DYORP (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
    0     0 KUBE-SEP-N4G2XR5TDX7PQE7P  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:metrics -> 10.244.0.2:9153 */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-PUHFDAMRBZWCPADU  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:metrics -> 10.244.0.4:9153 */

Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.96.0.1            /* default/kubernetes:https cluster IP */ tcp dpt:443
    0     0 KUBE-SEP-7NBDIM4CRVL5CDQU  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/kubernetes:https -> 192.168.228.3:6443 */

Chain KUBE-SVC-OI3ES3UZPSOHIVZW (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  tcp  --  *      *      !10.244.0.0/16        10.96.191.124        /* default/np-service cluster IP */ tcp dpt:80
    0     0 KUBE-SEP-RP3NPELGJOKVPZER  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/np-service -> 10.244.1.3:8080 */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-T4U2PF73XRV27O6N  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/np-service -> 10.244.2.3:8080 */

Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 KUBE-MARK-MASQ  udp  --  *      *      !10.244.0.0/16        10.96.0.10           /* kube-system/kube-dns:dns cluster IP */ udp dpt:53
    0     0 KUBE-SEP-YIL6JZP7A3QYXJU2  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns -> 10.244.0.2:53 */ statistic mode random probability 0.50000000000
    0     0 KUBE-SEP-WXWGHGKZOCNYRYI7  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* kube-system/kube-dns:dns -> 10.244.0.4:53 */

$ iptables -t mangle -L -n -v
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-IPTABLES-HINT (0 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-KUBELET-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain KUBE-PROXY-CANARY (0 references)
 pkts bytes target     prot opt in     out     source               destination
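
Looking back at the KUBE-SVC-* chains in the nat table above, endpoint selection uses iptables' statistic match: with two endpoints, the first jump to a KUBE-SEP-* chain carries `probability 0.50000000000` and the second jump is unconditional, so each endpoint is picked with probability 1/2. The following is a minimal Go sketch of that per-rule arithmetic only (not the actual kube-proxy code):

```go
package main

import "fmt"

// For n endpoints, n jump rules to KUBE-SEP-* chains are emitted.
// Rule i (0-based) matches with probability 1/(n-i) among the packets
// that fell through the earlier rules, so every endpoint ends up being
// selected with overall probability 1/n. The last rule needs no
// statistic match at all.
func main() {
	n := 2 // e.g. default/np-service has two endpoints above
	for i := 0; i < n; i++ {
		remaining := n - i
		if remaining > 1 {
			fmt.Printf("rule %d: -m statistic --mode random --probability %.11f\n",
				i, 1.0/float64(remaining))
		} else {
			fmt.Printf("rule %d: unconditional jump to the last KUBE-SEP chain\n", i)
		}
	}
}
```

With n = 2 this prints `0.50000000000` for the first rule, matching the dump above; with n = 3 it would print `0.33333333333` and then `0.50000000000`, which is the same pattern produced for Services with more endpoints.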
bells17bells17

Verifying externalIPs

apiVersion: v1
kind: Service
metadata:
  name: eip-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
  type: ClusterIP
  externalIPs:
    - "192.168.228.3"
    - "192.168.228.5"
    - "192.168.228.4"

Apply the manifest above (see the example below).
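
For example (the file name `eip-service.yaml` is just for illustration):

```shell
$ kubectl apply -f eip-service.yaml
service/eip-service created
```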

root@kind-control-plane:/# curl 192.168.228.3:8080 -o /dev/null -w '%{http_code}\n' -s
200

Access from outside the cluster (via one of the externalIPs) worked.
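
As a follow-up, the rules that syncProxyRules generated for these externalIPs can presumably be inspected on the node the same way as the dump above; a sketch, assuming kube-proxy keeps its usual comment format (`<namespace>/<name> cluster IP` / `<namespace>/<name> external IP`):

```shell
# List the KUBE-SERVICES entries for eip-service: there should be one
# "cluster IP" rule plus one rule per address in spec.externalIPs,
# each jumping into the Service's KUBE-SVC-* / KUBE-EXT-* chain.
$ iptables -t nat -L KUBE-SERVICES -n -v | grep eip-service
```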