Open8
cilium 101

memo
- Requirements: https://docs.cilium.io/en/stable/operations/system_requirements/
- Getting started (kind): https://docs.cilium.io/en/stable/gettingstarted/kind/
background of cilium
- https://thinkit.co.jp/article/15281
- There are cases where iptables becomes a network bottleneck
  - Overhead from iptables reloads in environments with frequent scale-in/out
  - Overhead in pod-to-pod communication when pods are run in a sidecar configuration
- Processing at the L2-L3 layers is executed multiple times along the path, which makes it complex
  - Socket - (TCP/IP) - eth0 (L2, Ethernet) - outer network - ...
  - https://docs.google.com/presentation/d/12pjduaqLtIMhOFlOJk18e-540Qp-o0nt1kZYTJGFFmM/edit#slide=id.g71706f0e87_1_0 (flannel-based network diagram; may be published once cleaned up)
- Network Policy at L3/L4 alone is not sufficient
what is cilium?
- points
  - Uses BPF (eBPF, a packet filter that runs inside the Linux kernel) → simplifies and speeds up the network path
- overview
  - https://github.com/cilium/cilium#functionality-overview (excerpts)
    - Protect and secure APIs transparently: realizes filtering at the L7 level
    - Secure service to service communication based on identities: filtering performed after attaching a security identity; differs from the conventional CNI approach of rewriting each node's iptables as pods scale
    - Simple Networking: "A simple flat Layer 3 network"
    - Load Balancing: replaces kube-proxy, which is used for communication with the outside
    - Bandwidth Management: bandwidth management using eBPF
    - Monitoring and Troubleshooting: improves network visibility
      - Event monitoring with metadata: detects and reports packet drops
      - Policy decision tracing: shows how policies apply to actual traffic
      - hubble: TBD
- packet flow
  - https://static.sched.com/hosted_files/kccnceu20/8f/Aug19_eBPF_and_Kubernetes_Little_Helper_Minions_for_Scaling_Microservices_Daniel_Borkmann.pdf (KubeCon + CloudNativeCon Europe 2020, day 2)
    - P28 (before Cilium)
      - After tc ingress, every iptables chain is processed (RAW_PREROUTING through NAT_POSTROUTING)
    - P30 (with Cilium)
      - TC Ingress → TC Egress appears to skip the iptables-related processing?
        - Cuts the overhead of the iptables processing
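As a hedged aside (not from the slides): once the cluster below is running, the BPF programs Cilium attaches at TC can be listed from inside a node with the standard tc tool. The device name is an assumption; any Cilium-managed device such as eth0, cilium_host, or an lxc* interface should show a bpf filter.
root@kind-worker:/# tc filter show dev eth0 ingress
root@kind-worker:/# tc filter show dev eth0 egress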

1: Create a k8s cluster with kind
(control plane: 1 / worker: 1)
~/w/k/kind ❯❯❯ less cilium.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
networking:
disableDefaultCNI: true
podSubnet: "10.10.0.0/16"
serviceSubnet: "10.11.0.0/16"
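The command that consumes this config is not captured in these notes; it would presumably be the standard kind invocation below. With disableDefaultCNI: true, the nodes stay NotReady until a CNI (Cilium, in the next step) is installed.
~/w/k/kind ❯❯❯ kind create cluster --config cilium.yaml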

2: Set up with helm (add the repository → install)
~/w/k/kind ❯❯❯ helm repo add cilium https://helm.cilium.io/
~/w/k/kind ❯❯❯ helm install cilium cilium/cilium --version 1.9.3 \
--namespace kube-system \
--set nodeinit.enabled=true \
--set kubeProxyReplacement=partial \
--set hostServices.enabled=false \
--set externalIPs.enabled=true \
--set nodePort.enabled=true \
--set hostPort.enabled=true \
--set bpf.masquerade=false \
--set image.pullPolicy=IfNotPresent \
--set ipam.mode=kubernetes
NAME: cilium
LAST DEPLOYED: Thu Jan 28 20:41:52 2021
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
You have successfully installed Cilium with Hubble.
Your release version is 1.9.3.
For any further help, visit https://docs.cilium.io/en/v1.9/gettinghelp
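Before moving on, it can help to wait for the agent DaemonSet to finish rolling out; a minimal check using standard kubectl (not part of the original capture):
~/w/k/kind ❯❯❯ kubectl -n kube-system rollout status ds/cilium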

3: Apply connectivity-check/connectivity-check.yaml
- https://github.com/cilium/cilium/blob/master/examples/kubernetes/connectivity-check/connectivity-check.yaml
- Uses liveness / readiness probe settings to perform the connectivity check
  - If a pod fails to come up, the liveness / readiness check defined for it is failing
    - (= it has no connectivity to a specific pod)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-to-a
...
        command:
        - /bin/ash
        - -c
        - sleep 1000000000
        readinessProbe:
          timeoutSeconds: 7
          exec:
            command:
            - curl
            - -sS
            - --fail
            - --connect-timeout
            - "5"
            - -o
            - /dev/null
            - echo-a:8080/public
        livenessProbe:
          timeoutSeconds: 7
          exec:
            command:
            - curl
            - -sS
            - --fail
            - --connect-timeout
            - "5"
            - -o
            - /dev/null
            - echo-a:8080/public  # checks connectivity to the echo-a server
- If any pod is failing, check the liveness / readiness probes configured for that pod
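A quick way to surface pods that are stuck; a sketch using a standard field selector (note it catches Pending/Failed pods but not pods that are Running yet unready):
~/w/k/kind ❯❯❯ kubectl get pods -n cilium-test --field-selector=status.phase!=Running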
~/w/k/kind ❯❯❯ kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
cilium-bstll 1/1 Running 0 8m46s
cilium-hzlc8 1/1 Running 0 8m46s
cilium-node-init-4dsdn 1/1 Running 1 8m46s
cilium-node-init-zttjd 1/1 Running 1 8m46s
cilium-operator-7c468bb4b6-szzbm 1/1 Running 0 8m46s
cilium-operator-7c468bb4b6-wwcmx 1/1 Running 0 8m46s
coredns-6955765f44-4lt7v 1/1 Running 0 19m
coredns-6955765f44-7xwqv 1/1 Running 0 19m
etcd-kind-control-plane 1/1 Running 0 19m
kube-apiserver-kind-control-plane 1/1 Running 0 19m
kube-controller-manager-kind-control-plane 1/1 Running 0 19m
kube-proxy-5pbgz 1/1 Running 0 19m
kube-proxy-fdq49 1/1 Running 0 19m
kube-scheduler-kind-control-plane 1/1 Running 0 19m
~/w/k/kind ❯❯❯ kubectl apply -n cilium-test -f https://raw.githubusercontent.com/cilium/cilium/1.9.3/examples/kubernetes/connectivity-check/connectivity-check.yaml
deployment.apps/echo-a created
deployment.apps/echo-b created
deployment.apps/echo-b-host created
deployment.apps/pod-to-a created
deployment.apps/pod-to-external-1111 created
deployment.apps/pod-to-a-denied-cnp created
deployment.apps/pod-to-a-allowed-cnp created
deployment.apps/pod-to-external-fqdn-allow-google-cnp created
deployment.apps/pod-to-b-multi-node-clusterip created
deployment.apps/pod-to-b-multi-node-headless created
deployment.apps/host-to-b-multi-node-clusterip created
deployment.apps/host-to-b-multi-node-headless created
deployment.apps/pod-to-b-multi-node-nodeport created
deployment.apps/pod-to-b-intra-node-nodeport created
service/echo-a created
service/echo-b created
service/echo-b-headless created
service/echo-b-host-headless created
ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created
ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created
~/w/k/kind ❯❯❯ k get po -n cilium-test
NAME READY STATUS RESTARTS AGE
echo-a-76c5d9bd76-v79x9 1/1 Running 0 161m
echo-b-795c4b4f76-9psvr 1/1 Running 0 161m
echo-b-host-6b7fc94b7c-gxr88 1/1 Running 0 161m
host-to-b-multi-node-clusterip-85476cd779-nmgxk 0/1 Pending 0 161m
host-to-b-multi-node-headless-dc6c44cb5-x5xt2 0/1 Pending 0 161m
pod-to-a-79546bc469-z4dmv 1/1 Running 0 161m
pod-to-a-allowed-cnp-58b7f7fb8f-z4bq5 1/1 Running 0 161m
pod-to-a-denied-cnp-6967cb6f7f-56zjc 1/1 Running 0 161m
pod-to-b-intra-node-nodeport-9b487cf89-vjc6f 1/1 Running 0 161m
pod-to-b-multi-node-clusterip-7db5dfdcf7-mvc67 0/1 Pending 0 161m
pod-to-b-multi-node-headless-7d44b85d69-pwv4t 0/1 Pending 0 161m
pod-to-b-multi-node-nodeport-7ffc76db7c-nsmrq 0/1 Pending 0 161m
pod-to-external-1111-d56f47579-72jnr 1/1 Running 0 161m
pod-to-external-fqdn-allow-google-cnp-78986f4bcf-cv8rk 1/1 Running 0 161m
Some pods are left unscheduled because of node affinity / pod anti-affinity
~/w/k/kind ❯❯❯ k describe po host-to-b-multi-node-clusterip-85476cd779-nmgxk -n cilium-test
Name: host-to-b-multi-node-clusterip-85476cd779-nmgxk
Namespace: cilium-test
Priority: 0
Node: <none>
Labels: name=host-to-b-multi-node-clusterip
pod-template-hash=85476cd779
Annotations: <none>
Status: Pending
IP:
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 85s (x112 over 168m) default-scheduler 0/2 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) had taints that the pod didn't tolerate.
Checking the spec of a Pending pod shows an affinity like the one below:
pod-to-b-multi-node-headless-7d44b85d69-pwv4t
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: name
          operator: In
          values:
          - echo-b
      topologyKey: kubernetes.io/hostname
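This explains the Pending pods: echo-b runs on the single worker node, the anti-affinity above forbids sharing a node with echo-b, and the only other node is the control plane, which carries a NoSchedule taint; the multi-node checks would need a second schedulable worker. The taint can be confirmed with standard kubectl (the exact taint key depends on the k8s version):
~/w/k/kind ❯❯❯ kubectl describe node kind-control-plane | grep Taints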

Check iptables
- memo
  - https://docs.cilium.io/en/v1.9/concepts/networking/routing/
    - Encapsulation mode (an overlay network using VXLAN) is used by default
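The shells below run inside the kind node containers; assuming the default kind container names, they can be entered with docker exec:
~/w/k/kind ❯❯❯ docker exec -it kind-worker bash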
Before the rewrite
root@kind-worker:/# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain KUBE-EXTERNAL-SERVICES (1 references)
target prot opt source destination
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
Chain KUBE-FORWARD (1 references)
target prot opt source destination
DROP all -- anywhere anywhere ctstate INVALID
ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- 10.10.0.0/16 anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere 10.10.0.0/16 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination
Chain KUBE-PROXY-CANARY (0 references)
target prot opt source destination
Chain KUBE-SERVICES (3 references)
target prot opt source destination
REJECT tcp -- anywhere 10.11.0.10 /* kube-system/kube-dns:dns-tcp has no endpoints */ tcp dpt:53 reject-with icmp-port-unreachable
REJECT tcp -- anywhere 10.11.0.10 /* kube-system/kube-dns:metrics has no endpoints */ tcp dpt:9153 reject-with icmp-port-unreachable
REJECT udp -- anywhere 10.11.0.10 /* kube-system/kube-dns:dns has no endpoints */ udp dpt:53 reject-with icmp-port-unreachable
After the rewrite
- https://docs.cilium.io/en/v1.9/concepts/ebpf/iptables/
- The iptables rules have been rewritten by Cilium
  - (the KUBE-xx chains are, as far as I remember, written by kube-proxy)
root@kind-control-plane:/# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
CILIUM_INPUT all -- anywhere anywhere /* cilium-feeder: CILIUM_INPUT */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
CILIUM_FORWARD all -- anywhere anywhere /* cilium-feeder: CILIUM_FORWARD */
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
CILIUM_OUTPUT all -- anywhere anywhere /* cilium-feeder: CILIUM_OUTPUT */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain CILIUM_FORWARD (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere /* cilium: any->cluster on cilium_host forward accept */
ACCEPT all -- anywhere anywhere /* cilium: cluster->any on cilium_host forward accept (nodeport) */
ACCEPT all -- anywhere anywhere /* cilium: cluster->any on lxc+ forward accept */
ACCEPT all -- anywhere anywhere /* cilium: cluster->any on cilium_net forward accept (nodeport) */
Chain CILIUM_INPUT (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere mark match 0x200/0xf00 /* cilium: ACCEPT for proxy traffic */
Chain CILIUM_OUTPUT (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere mark match 0xa00/0xfffffeff /* cilium: ACCEPT for proxy return traffic */
MARK all -- anywhere anywhere mark match ! 0xe00/0xf00 mark match ! 0xd00/0xf00 mark match ! 0xa00/0xe00 /* cilium: host->any mark as from host */ MARK xset 0xc00/0xf00
Chain KUBE-EXTERNAL-SERVICES (1 references)
target prot opt source destination
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
Chain KUBE-FORWARD (1 references)
target prot opt source destination
DROP all -- anywhere anywhere ctstate INVALID
ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- 10.10.0.0/16 anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere 10.10.0.0/16 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination
Chain KUBE-PROXY-CANARY (0 references)
target prot opt source destination
Chain KUBE-SERVICES (3 references)
target prot opt source destination

Check routing / ARP tables
After the rewrite
- The routing table has also been rewritten by Cilium: routes to the pod subnets (10.10.0.0/24, 10.10.1.0/24) now point at cilium_host
- TBD
root@kind-control-plane:/# ip r
default via 172.17.0.1 dev eth0
10.10.0.0/24 via 10.10.0.25 dev cilium_host src 10.10.0.25 mtu 1450
10.10.0.25 dev cilium_host scope link
10.10.1.0/24 via 10.10.0.25 dev cilium_host src 10.10.0.25 mtu 1450
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.3
root@kind-control-plane:/# ip n
172.17.0.2 dev eth0 lladdr 02:42:ac:11:00:02 PERMANENT
172.17.0.1 dev eth0 lladdr 02:42:9a:47:79:ca STALE
root@kind-control-plane:/# bridge link show
root@kind-control-plane:/#
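Since encapsulation (VXLAN) mode is in use, each node should also have a cilium_vxlan device; a way to inspect it with standard iproute2 (not part of the original capture):
root@kind-control-plane:/# ip -d link show cilium_vxlan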

Identity-Aware and HTTP-Aware Policy Enforcement
- https://docs.cilium.io/en/v1.9/gettingstarted/http/
  - Example of defining and using an L7-based policy
- memo
  - The list on the cilium side is updated after the pods are deployed
  - Requests from xwing/tiefighter to deathstar.default.svc.cluster.local/v1/request-landing succeed
    - As expected, since no Network Policy has been applied yet
~/w/k/kind ❯❯❯ kubectl -n kube-system get pods -l k8s-app=cilium
NAME READY STATUS RESTARTS AGE
cilium-jfvf5 1/1 Running 1 22m
cilium-m68jt 1/1 Running 0 22m
~/w/k/kind ❯❯❯ k get pods,svc
NAME READY STATUS RESTARTS AGE
pod/deathstar-657477f57d-k8npf 1/1 Running 0 58s
pod/deathstar-657477f57d-mfbw2 1/1 Running 0 58s
pod/tiefighter 1/1 Running 0 58s
pod/xwing 1/1 Running 0 58s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/deathstar ClusterIP 10.11.73.130 <none> 80/TCP 58s
service/kubernetes ClusterIP 10.11.0.1 <none> 443/TCP 112m
// the newly created pods are now recognized on the cilium side
~/w/k/kind ❯❯❯ kubectl -n kube-system exec cilium-m68jt -- cilium endpoint list
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
53 Disabled Disabled 4 reserved:health 10.10.1.184 ready
180 Disabled Disabled 4633 k8s:class=deathstar 10.10.1.215 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
183 Disabled Disabled 30430 k8s:class=xwing 10.10.1.127 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=alliance
378 Disabled Disabled 1 reserved:host ready
582 Disabled Disabled 4633 k8s:class=deathstar 10.10.1.253 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
827 Disabled Disabled 23349 k8s:class=tiefighter 10.10.1.245 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:org=empire
1718 Disabled Disabled 1513 k8s:io.cilium.k8s.policy.cluster=default 10.10.1.63 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1963 Disabled Disabled 1513 k8s:io.cilium.k8s.policy.cluster=default 10.10.1.198 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
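The IDENTITY column is the numeric security identity computed from each pod's labels; the label → identity mapping itself can be listed from the agent CLI (an extra step, not in the original capture):
~/w/k/kind ❯❯❯ kubectl -n kube-system exec cilium-m68jt -- cilium identity list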
L3/L4
- The Network Policy is set using the labels attached to the pods, not the IPs assigned to them
- Apply a policy that only allows traffic carrying the org=empire label (see the sketch below)
- A Custom Resource is created for the policy (can be listed with k get cnp)
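nwpolicy_1.yaml itself is not reproduced in these notes; judging from the getting-started guide's sw_l3_l4_policy.yaml, it presumably looks like this (an assumption, not the verbatim file):
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
spec:
  description: "L3-L4 policy to restrict deathstar access to empire ships only"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP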
~/w/k/cilium ❯❯❯ kubectl create -f ./nwpolicy_1.yaml
ciliumnetworkpolicy.cilium.io/rule1 created
// the request hangs; giving up after waiting :(
~/w/k/cilium ❯❯❯ kubectl exec xwing -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
^C
~/w/k/cilium ❯❯❯ kubectl get cnp
NAME AGE
rule1 110s
~/w/k/cilium ❯❯❯ k describe po xwing
Name: xwing
Namespace: default
Priority: 0
Node: kind-worker/172.17.0.3
Start Time: Sat, 06 Feb 2021 18:18:05 +0900
Labels: class=xwing
org=alliance
Annotations: <none>
Status: Running
IP: 10.10.1.127
...
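The xwing request hangs because xwing carries org=alliance, which does not match the allow rule; with the now default-deny ingress on deathstar, its packets are silently dropped. A way to watch those drops from the agent (a sketch; cilium-m68jt is the worker-node agent pod from the earlier listing):
~/w/k/kind ❯❯❯ kubectl -n kube-system exec cilium-m68jt -- cilium monitor --type drop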
HTTP-aware L7 Policy
- We want to allow traffic to the pod itself, but block requests to specific API paths
  - e.g. /v1/exhaust-port causes a panic, so those requests should be denied
- In a CiliumNetworkPolicy, select the target pods (by label) and define the policy
- ex)
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
name: "rule1"
spec:
description: "L7 policy to restrict access to specific HTTP call"
endpointSelector:
matchLabels:
org: empire
class: deathstar
ingress:
- fromEndpoints:
- matchLabels:
org: empire
toPorts:
- ports:
- port: "80"
protocol: TCP
rules:
http:
- method: "POST"
path: "/v1/request-landing"
~/w/k/cilium ❯❯❯ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Panic: deathstar exploded
goroutine 1 [running]:
main.HandleGarbage(0x2080c3f50, 0x2, 0x4, 0x425c0, 0x5, 0xa)
/code/src/github.com/empire/deathstar/
temp/main.go:9 +0x64
main.main()
/code/src/github.com/empire/deathstar/
temp/main.go:5 +0x85
~/w/k/cilium ❯❯❯ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.9/examples/minikube/sw_l3_l4_l7_policy.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
ciliumnetworkpolicy.cilium.io/rule1 configured
// path-based access control is now in effect
~/w/k/cilium ❯❯❯ kubectl exec tiefighter -- curl -s -XPOST deathstar.default.svc.cluster.local/v1/request-landing
Ship landed
~/w/k/cilium ❯❯❯ kubectl exec tiefighter -- curl -s -XPUT deathstar.default.svc.cluster.local/v1/exhaust-port
Access denied
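The L7 verdicts (HTTP allow/deny decisions enforced via Cilium's embedded proxy) can also be observed live; a sketch using the agent's monitor with the l7 event filter:
~/w/k/kind ❯❯❯ kubectl -n kube-system exec cilium-m68jt -- cilium monitor --type l7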
// the agent on the worker side holds the policy config
~/w/k/cilium ❯❯❯ kubectl -n kube-system exec cilium-m68jt -- cilium policy get
[
  {
    "endpointSelector": {
      "matchLabels": {
        "any:class": "deathstar",
        "any:org": "empire",
        "k8s:io.kubernetes.pod.namespace": "default"
      }
    },
    "ingress": [
      {
        "fromEndpoints": [
          {
            "matchLabels": {
              "any:org": "empire",
              "k8s:io.kubernetes.pod.namespace": "default"
            }
          }
        ],
        "toPorts": [
          {
            "ports": [
              {
                "port": "80",
                "protocol": "TCP"
              }
            ],
            "rules": {
              "http": [
                {
                  "path": "/v1/request-landing",
                  "method": "POST"
                }
              ]
            }
          }
        ]
      }
    ],
    "labels": [
      {
        "key": "io.cilium.k8s.policy.derived-from",
        "value": "CiliumNetworkPolicy",
        "source": "k8s"
      },
      {
        "key": "io.cilium.k8s.policy.name",
        "value": "rule1",
        "source": "k8s"
      },
      {
        "key": "io.cilium.k8s.policy.namespace",
        "value": "default",
        "source": "k8s"
      },
      {
        "key": "io.cilium.k8s.policy.uid",
        "value": "5c2e9040-36fc-4f5b-9a0b-eed0bd4aae31",
        "source": "k8s"
      }
    ],
    "description": "L7 policy to restrict access to specific HTTP call"
  }
]
Revision: 4

Locking down external access with DNS-based policies
TBD
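A minimal sketch of what such a policy looks like, based on the DNS getting-started guide (https://docs.cilium.io/en/v1.9/gettingstarted/dns/); the name, labels, and FQDN below are illustrative assumptions, not a tested config:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn-allow"  # hypothetical name
spec:
  endpointSelector:
    matchLabels:
      org: alliance  # illustrative label
  egress:
  # allow DNS lookups via kube-dns so Cilium can learn the FQDN -> IP mapping
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  # then allow egress only to the named FQDN
  - toFQDNs:
    - matchName: "api.github.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP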