====================================================================================================
========================================= Pod describe =========================================
====================================================================================================
Name:                 kube-dns-845c6bc884-kd2wk
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      kube-dns
Node:                 gke-xlou-cdm-default-pool-9d1e5395-uo2k/10.142.0.93
Start Time:           Sat, 03 May 2025 10:38:43 +0000
Labels:               k8s-app=kube-dns
                      pod-template-hash=845c6bc884
Annotations:          components.gke.io/component-name: kubedns
                      components.gke.io/component-version: 31.1.4
                      kubectl.kubernetes.io/restartedAt: 2024-04-10T15:47:12Z
                      prometheus.io/port: 10054
                      prometheus.io/scrape: true
                      scheduler.alpha.kubernetes.io/critical-pod:
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:               Running
IP:                   10.106.40.5
IPs:
  IP:  10.106.40.5
Controlled By:  ReplicaSet/kube-dns-845c6bc884
Containers:
  kubedns:
    Container ID:  containerd://02394428baeb9bb31e96a0d28bbb83b81be94f754833f7e3b46ecfc0779d2600
    Image:         gke.gcr.io/k8s-dns-kube-dns:1.23.0-gke.20@sha256:b609a51c8aa4add2d1d0811737f177b4e944ea0781a48eead0d804722787f96f
    Image ID:      gke.gcr.io/k8s-dns-kube-dns@sha256:b609a51c8aa4add2d1d0811737f177b4e944ea0781a48eead0d804722787f96f
    Ports:         10053/UDP, 10053/TCP
    Host Ports:    0/UDP, 0/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-dir=/kube-dns-config
      --v=2
    State:          Running
      Started:      Sat, 03 May 2025 10:38:45 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  210Mi
    Requests:
      cpu:     100m
      memory:  70Mi
    Liveness:   http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2qg64 (ro)
  dnsmasq:
    Container ID:  containerd://8ffb33925620c7acf4dd2b536c6506e8140ecc27c2e3df0dbd82f47f0131979e
    Image:         gke.gcr.io/k8s-dns-dnsmasq-nanny:1.23.0-gke.20@sha256:e178b753d49a90ec32f1f45e0f52ce64019641d3fd45d8deadcf08cb73b8c840
    Image ID:      gke.gcr.io/k8s-dns-dnsmasq-nanny@sha256:e178b753d49a90ec32f1f45e0f52ce64019641d3fd45d8deadcf08cb73b8c840
    Ports:         53/UDP, 53/TCP
    Host Ports:    0/UDP, 0/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --no-negcache
      --dns-forward-max=1500
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
      --max-ttl=30
      --max-cache-ttl=30
      --max-tcp-connections=200
    State:          Running
      Started:      Sat, 03 May 2025 10:38:53 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     150m
      memory:  20Mi
    Liveness:  http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:
    Mounts:
      /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2qg64 (ro)
  sidecar:
    Container ID:  containerd://4b64ddfce7f6ab1a2b93f0f1eecf81353ba90d9bc1f8dc1fbb0780df1a898466
    Image:         gke.gcr.io/k8s-dns-sidecar:1.23.0-gke.20@sha256:9e60f83b54d010a7dd7e5a868a6713ad410442c72f0b7540cda010c50651c0bc
    Image ID:      gke.gcr.io/k8s-dns-sidecar@sha256:9e60f83b54d010a7dd7e5a868a6713ad410442c72f0b7540cda010c50651c0bc
    Port:          10054/TCP
    Host Port:     0/TCP
    Args:
      --v=2
      --logtostderr
      --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
      --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
    State:          Running
      Started:      Sat, 03 May 2025 10:39:02 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     10m
      memory:  20Mi
    Liveness:  http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2qg64 (ro)
  prometheus-to-sd:
    Container ID:  containerd://46941487fdcc8cf2f57380fddf8dcf4c9945fa1272588616c27ac206f4c118e6
    Image:         gke.gcr.io/prometheus-to-sd:v0.11.12-gke.51@sha256:798127b7368b1a3a2851a6a336776739f32b0ed741d5d6ee07b97d6ac2998fa3
    Image ID:      gke.gcr.io/prometheus-to-sd@sha256:798127b7368b1a3a2851a6a336776739f32b0ed741d5d6ee07b97d6ac2998fa3
    Port:
    Host Port:
    Command:
      /monitor
      --source=kubedns:http://localhost:10054?whitelisted=probe_kubedns_latency_ms,probe_kubedns_errors,probe_dnsmasq_latency_ms,probe_dnsmasq_errors,dnsmasq_misses,dnsmasq_hits
      --stackdriver-prefix=container.googleapis.com/internal/addons
      --api-override=https://monitoring.googleapis.com/
      --pod-id=$(POD_NAME)
      --namespace-id=$(POD_NAMESPACE)
      --v=2
    State:          Running
      Started:      Sat, 03 May 2025 10:39:05 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      POD_NAME:       kube-dns-845c6bc884-kd2wk (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2qg64 (ro)
  kubedns-metrics-collector:
    Container ID:  containerd://90250bbe2e9d71768ebc57ac7c5f4226f5e21466cd272cff74e8da2d336d5271
    Image:         gke.gcr.io/gke-metrics-collector:20250217_2300_RC0@sha256:b78e39d6a9780ee6e86038727ce45839e6cb2519c836db9cf95cf951ea47ab70
    Image ID:      gke.gcr.io/gke-metrics-collector@sha256:b78e39d6a9780ee6e86038727ce45839e6cb2519c836db9cf95cf951ea47ab70
    Port:
    Host Port:
    State:          Running
      Started:      Sat, 03 May 2025 10:39:06 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  30Mi
    Requests:
      cpu:     5m
      memory:  30Mi
    Environment:
      GOMAXPROCS:             2
      COLLECTOR_CONFIG_PATH:  /conf/kubedns-metrics-collector-config-data.textproto
      SPLIT_GAUGE_BUFFER:     true
      PROJECT_NUMBER:         941969722215
      LOCATION:               us-east1-d
      CLUSTER_NAME:           xlou-cdm
      POD_NAMESPACE:          kube-system (v1:metadata.namespace)
      NODE_NAME:              (v1:spec.nodeName)
      POD_NAME:               kube-dns-845c6bc884-kd2wk (v1:metadata.name)
      CONTAINER_NAME:         kubedns-metrics-collector
      COMPONENT_VERSION:      (v1:metadata.annotations['components.gke.io/component-version'])
      COMPONENT_NAME:         (v1:metadata.annotations['components.gke.io/component-name'])
    Mounts:
      /conf from kubedns-metrics-collector-config-map-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2qg64 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-dns-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-dns
    Optional:  true
  kubedns-metrics-collector-config-map-vol:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kubedns-metrics-collector-config-map
    Optional:  false
  kube-api-access-2qg64:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly op=Exists
                 components.gke.io/gke-managed-components op=Exists
                 kubernetes.io/arch=arm64:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
====================================================================================================
=========================================== Pod logs ===========================================
====================================================================================================
I0503 10:38:45.823808 1 flags.go:57] FLAG: --add-dir-header="false"
I0503 10:38:45.824618 1 flags.go:57] FLAG: --alsologtostderr="false"
I0503 10:38:45.824621 1 flags.go:57] FLAG: --config-dir="/kube-dns-config"
I0503 10:38:45.824627 1 flags.go:57] FLAG: --config-map=""
I0503 10:38:45.824630 1 flags.go:57] FLAG: --config-map-namespace="kube-system"
I0503 10:38:45.824635 1 flags.go:57] FLAG: --config-period="10s"
I0503 10:38:45.824641 1 flags.go:57] FLAG: --dns-bind-address="0.0.0.0"
I0503 10:38:45.824645 1 flags.go:57] FLAG: --dns-port="10053"
I0503 10:38:45.824650 1 flags.go:57] FLAG: --domain="cluster.local."
I0503 10:38:45.824659 1 flags.go:57] FLAG: --federations=""
I0503 10:38:45.824665 1 flags.go:57] FLAG: --healthz-port="8081"
I0503 10:38:45.824668 1 flags.go:57] FLAG: --initial-sync-timeout="1m0s"
I0503 10:38:45.824672 1 flags.go:57] FLAG: --kube-master-url=""
I0503 10:38:45.824676 1 flags.go:57] FLAG: --kubecfg-file=""
I0503 10:38:45.824679 1 flags.go:57] FLAG: --log-backtrace-at=":0"
I0503 10:38:45.824687 1 flags.go:57] FLAG: --log-dir=""
I0503 10:38:45.824691 1 flags.go:57] FLAG: --log-file=""
I0503 10:38:45.824694 1 flags.go:57] FLAG: --log-file-max-size="1800"
I0503 10:38:45.824703 1 flags.go:57] FLAG: --log-flush-frequency="5s"
I0503 10:38:45.824709 1 flags.go:57] FLAG: --logtostderr="true"
I0503 10:38:45.824713 1 flags.go:57] FLAG: --nameservers=""
I0503 10:38:45.824716 1 flags.go:57] FLAG: --one-output="false"
I0503 10:38:45.824719 1 flags.go:57] FLAG: --profiling="false"
I0503 10:38:45.824737 1 flags.go:57] FLAG: --skip-headers="false"
I0503 10:38:45.824741 1 flags.go:57] FLAG: --skip-log-headers="false"
I0503 10:38:45.824744 1 flags.go:57] FLAG: --stderrthreshold="2"
I0503 10:38:45.824747 1 flags.go:57] FLAG: --v="2"
I0503 10:38:45.824750 1 flags.go:57] FLAG: --version="false"
I0503 10:38:45.824758 1 flags.go:57] FLAG: --vmodule=""
I0503 10:38:45.824799 1 dns.go:49] version: 1.23.0-gke.20
I0503 10:38:45.825165 1 server.go:73] Using configuration read from directory: /kube-dns-config with period 10s
I0503 10:38:45.825195 1 server.go:126] FLAG: --add-dir-header="false"
I0503 10:38:45.825202 1 server.go:126] FLAG: --alsologtostderr="false"
I0503 10:38:45.825206 1 server.go:126] FLAG: --config-dir="/kube-dns-config"
I0503 10:38:45.825210 1 server.go:126] FLAG: --config-map=""
I0503 10:38:45.825214 1 server.go:126] FLAG: --config-map-namespace="kube-system"
I0503 10:38:45.825218 1 server.go:126] FLAG: --config-period="10s"
I0503 10:38:45.825225 1 server.go:126] FLAG: --dns-bind-address="0.0.0.0"
I0503 10:38:45.825229 1 server.go:126] FLAG: --dns-port="10053"
I0503 10:38:45.825233 1 server.go:126] FLAG: --domain="cluster.local."
I0503 10:38:45.825237 1 server.go:126] FLAG: --federations=""
I0503 10:38:45.825244 1 server.go:126] FLAG: --healthz-port="8081"
I0503 10:38:45.825248 1 server.go:126] FLAG: --initial-sync-timeout="1m0s"
I0503 10:38:45.825252 1 server.go:126] FLAG: --kube-master-url=""
I0503 10:38:45.825256 1 server.go:126] FLAG: --kubecfg-file=""
I0503 10:38:45.825260 1 server.go:126] FLAG: --log-backtrace-at=":0"
I0503 10:38:45.825268 1 server.go:126] FLAG: --log-dir=""
I0503 10:38:45.825272 1 server.go:126] FLAG: --log-file=""
I0503 10:38:45.825275 1 server.go:126] FLAG: --log-file-max-size="1800"
I0503 10:38:45.825282 1 server.go:126] FLAG: --log-flush-frequency="5s"
I0503 10:38:45.825285 1 server.go:126] FLAG: --logtostderr="true"
I0503 10:38:45.825289 1 server.go:126] FLAG: --nameservers=""
I0503 10:38:45.825294 1 server.go:126] FLAG: --one-output="false"
I0503 10:38:45.825301 1 server.go:126] FLAG: --profiling="false"
I0503 10:38:45.825305 1 server.go:126] FLAG: --skip-headers="false"
I0503 10:38:45.825308 1 server.go:126] FLAG: --skip-log-headers="false"
I0503 10:38:45.825312 1 server.go:126] FLAG: --stderrthreshold="2"
I0503 10:38:45.825315 1 server.go:126] FLAG: --v="2"
I0503 10:38:45.825319 1 server.go:126] FLAG: --version="false"
I0503 10:38:45.825344 1 server.go:126] FLAG: --vmodule=""
I0503 10:38:45.825458 1 server.go:182] Starting SkyDNS server (0.0.0.0:10053)
I0503 10:38:45.825591 1 server.go:194] Skydns metrics enabled (/metrics:10055)
I0503 10:38:45.825606 1 dns.go:190] Starting endpointsController
I0503 10:38:45.825611 1 dns.go:193] Starting serviceController
I0503 10:38:45.825674 1 dns.go:186] Configuration updated: {TypeMeta:{Kind: APIVersion:} Federations:map[] StubDomains:map[] UpstreamNameservers:[]}
I0503 10:38:45.825720 1 log.go:245] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0503 10:38:45.825737 1 log.go:245] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0503 10:38:46.326401 1 dns.go:224] Initialized services and endpoints from apiserver
I0503 10:38:46.326439 1 server.go:150] Setting up Healthz Handler (/readiness)
I0503 10:38:46.326464 1 server.go:155] Setting up cache handler (/cache)
I0503 10:38:46.326474 1 server.go:136] Status HTTP port 8081
I0503 10:38:54.004341 1 main.go:78] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --no-negcache --dns-forward-max=1500 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053 --max-ttl=30 --max-cache-ttl=30 --max-tcp-connections=200] true} /etc/k8s/dns/dnsmasq-nanny 10000000000 127.0.0.1:10053}
I0503 10:38:54.004511 1 nanny.go:124] Starting dnsmasq [-k --cache-size=1000 --no-negcache --dns-forward-max=1500 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053 --max-ttl=30 --max-cache-ttl=30 --max-tcp-connections=200]
I0503 10:38:54.007060 1 nanny.go:149]
W0503 10:38:54.007083 1 nanny.go:150] Got EOF from stdout
I0503 10:38:54.007096 1 nanny.go:146] dnsmasq[11]: started, version 2.90 cachesize 1000
I0503 10:38:54.007104 1 nanny.go:146] dnsmasq[11]: compile time options: IPv6 GNU-getopt no-DBus no-UBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset no-nftset auth no-cryptohash no-DNSSEC loop-detect inotify dumpfile
I0503 10:38:54.007112 1 nanny.go:146] dnsmasq[11]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0503 10:38:54.007116 1 nanny.go:146] dnsmasq[11]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0503 10:38:54.007119 1 nanny.go:146] dnsmasq[11]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0503 10:38:54.007124 1 nanny.go:146] dnsmasq[11]: reading /etc/resolv.conf
I0503 10:38:54.007127 1 nanny.go:146] dnsmasq[11]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0503 10:38:54.007131 1 nanny.go:146] dnsmasq[11]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0503 10:38:54.007134 1 nanny.go:146] dnsmasq[11]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0503 10:38:54.007138 1 nanny.go:146] dnsmasq[11]: using nameserver 169.254.169.254#53
I0503 10:38:54.007143 1 nanny.go:146] dnsmasq[11]: read /etc/hosts - 9 names
Flag --logtostderr has been deprecated, will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components
I0503 10:39:03.005915 1 flags.go:57] FLAG: --add-dir-header="false"
I0503 10:39:03.006001 1 flags.go:57] FLAG: --alsologtostderr="false"
I0503 10:39:03.006008 1 flags.go:57] FLAG: --dnsmasq-addr="127.0.0.1"
I0503 10:39:03.006015 1 flags.go:57] FLAG: --dnsmasq-poll-interval-ms="5000"
I0503 10:39:03.006021 1 flags.go:57] FLAG: --dnsmasq-port="53"
I0503 10:39:03.006026 1 flags.go:57] FLAG: --log-backtrace-at=":0"
I0503 10:39:03.006036 1 flags.go:57] FLAG: --log-dir=""
I0503 10:39:03.006041 1 flags.go:57] FLAG: --log-file=""
I0503 10:39:03.006046 1 flags.go:57] FLAG: --log-file-max-size="1800"
I0503 10:39:03.006051 1 flags.go:57] FLAG: --log-flush-frequency="5s"
I0503 10:39:03.006056 1 flags.go:57] FLAG: --logtostderr="true"
I0503 10:39:03.006061 1 flags.go:57] FLAG: --one-output="false"
I0503 10:39:03.006066 1 flags.go:57] FLAG: --probe="[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}]"
I0503 10:39:03.006099 1 flags.go:57] FLAG: --prometheus-addr="0.0.0.0"
I0503 10:39:03.006105 1 flags.go:57] FLAG: --prometheus-namespace="kubedns"
I0503 10:39:03.006110 1 flags.go:57] FLAG: --prometheus-path="/metrics"
I0503 10:39:03.006122 1 flags.go:57] FLAG: --prometheus-port="10054"
I0503 10:39:03.006128 1 flags.go:57] FLAG: --skip-headers="false"
I0503 10:39:03.006132 1 flags.go:57] FLAG: --skip-log-headers="false"
I0503 10:39:03.006136 1 flags.go:57] FLAG: --stderrthreshold="2"
I0503 10:39:03.006141 1 flags.go:57] FLAG: --v="2"
I0503 10:39:03.006146 1 flags.go:57] FLAG: --version="false"
I0503 10:39:03.006163 1 flags.go:57] FLAG: --vmodule=""
I0503 10:39:03.006186 1 main.go:55] Version v1.23.0-gke.20
I0503 10:39:03.006196 1 server.go:46] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
I0503 10:39:03.006213 1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}
I0503 10:39:03.006245 1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}
I0503 10:39:05.216576 1 main.go:125] GCE config: &{Project:engineeringpit Zone:us-east1-d Cluster:xlou-cdm ClusterLocation:us-east1-d Instance:gke-xlou-cdm-default-pool-9d1e5395-uo2k InstanceId:2639458767409270014}
I0503 10:39:05.216636 1 main.go:194] Taking source configs from flags
I0503 10:39:05.216647 1 main.go:196] Taking source configs from kubernetes api server
I0503 10:39:05.216652 1 main.go:128] Built the following source configs: [0xc0001124e0]
I0503 10:39:05.216704 1 main.go:205] Running prometheus-to-sd, monitored target is kubedns http://localhost:10054
{"level":"info","ts":1746268746.271876,"caller":"collector/main.go:47","msg":"Starting Metrics Collector","log_first_n":2,"log_interval(s)":3600}
{"level":"info","ts":1746268746.2729063,"caller":"collector/multi_target_collector.go:50","msg":"Start Metrics Collector","target_url":"http://127.0.0.1:10055/metrics","target_name":"kubedns"}
{"level":"info","ts":1746268746.273175,"caller":"collector/collector.go:111","msg":"Connecting to Cloud Monitoring","target_name":"kubedns","endpoint":"monitoring.googleapis.com:443"}
{"level":"error","ts":1746537726.2773585,"caller":"gcm/export.go:498","msg":"Failed to export self-observability metrics to Cloud Monitoring","error":"rpc error: code = Unavailable desc = error reading from server: read tcp 10.106.40.5:39112->172.217.203.95:443: read: connection reset by peer","stacktrace":"google3/cloud/kubernetes/metrics/common/gcm/gcm.(*exporter).startSelfObservability\n\tcloud/kubernetes/metrics/common/gcm/export.go:498"}
{"level":"error","ts":1746541566.277459,"caller":"gcm/export.go:498","msg":"Failed to export self-observability metrics to Cloud Monitoring","error":"rpc error: code = Unavailable desc = error reading from server: read tcp 10.106.40.5:35478->173.194.210.95:443: read: connection reset by peer","stacktrace":"google3/cloud/kubernetes/metrics/common/gcm/gcm.(*exporter).startSelfObservability\n\tcloud/kubernetes/metrics/common/gcm/export.go:498"}
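The log section above mixes two formats: klog-style text lines (`I0503 10:38:45.824618 1 flags.go:57] ...`) from kubedns/dnsmasq-nanny/sidecar, and structured JSON lines (`{"level":"error",...}`) from the metrics collector. A minimal sketch of a parser that normalizes both into one record shape; `parse_line` and the output field names (`sev`, `src`, `msg`) are illustrative choices, not part of any tool shown here:

```python
import json
import re

# klog header: severity letter, MMDD, HH:MM:SS.micros, thread id, file:line]
# Assumed shape matches the lines in the dump above; not an official grammar.
KLOG_RE = re.compile(
    r'^(?P<sev>[IWEF])\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+'
    r'\d+ (?P<src>[\w.]+:\d+)\] (?P<msg>.*)$'
)
SEVERITY = {'I': 'info', 'W': 'warning', 'E': 'error', 'F': 'fatal'}

def parse_line(line):
    """Return {'sev', 'src', 'msg'} for a klog or JSON log line, else None."""
    line = line.strip()
    if line.startswith('{'):
        rec = json.loads(line)  # structured lines carry level/caller/msg keys
        return {'sev': rec['level'], 'src': rec['caller'], 'msg': rec['msg']}
    m = KLOG_RE.match(line)
    if m is None:
        return None  # e.g. bare deprecation warnings with no klog header
    return {'sev': SEVERITY[m.group('sev')],
            'src': m.group('src'),
            'msg': m.group('msg')}
```

Filtering the dump through this (e.g. keeping only `sev == 'error'`) surfaces the two Cloud Monitoring export failures at the end of the logs without reading every flag echo.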