====================================================================================================
========================================= Pod describe =========================================
====================================================================================================
Name:                 kube-dns-845c6bc884-gqg2f
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Service Account:      kube-dns
Node:                 gke-xlou-cdm-frontend-18a44ad7-fq4g/10.142.0.94
Start Time:           Sat, 03 May 2025 10:59:16 +0000
Labels:               k8s-app=kube-dns
                      pod-template-hash=845c6bc884
Annotations:          components.gke.io/component-name: kubedns
                      components.gke.io/component-version: 31.1.4
                      kubectl.kubernetes.io/restartedAt: 2024-04-10T15:47:12Z
                      prometheus.io/port: 10054
                      prometheus.io/scrape: true
                      scheduler.alpha.kubernetes.io/critical-pod:
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:               Running
IP:                   10.106.41.4
IPs:
  IP:           10.106.41.4
Controlled By:  ReplicaSet/kube-dns-845c6bc884
Containers:
  kubedns:
    Container ID:  containerd://480d1caec1bd0cc9d14409315bde5d0faeda88438ff7df137593ec73ef986b5a
    Image:         gke.gcr.io/k8s-dns-kube-dns:1.23.0-gke.20@sha256:b609a51c8aa4add2d1d0811737f177b4e944ea0781a48eead0d804722787f96f
    Image ID:      gke.gcr.io/k8s-dns-kube-dns@sha256:b609a51c8aa4add2d1d0811737f177b4e944ea0781a48eead0d804722787f96f
    Ports:         10053/UDP, 10053/TCP
    Host Ports:    0/UDP, 0/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-dir=/kube-dns-config
      --v=2
    State:          Running
      Started:      Sat, 03 May 2025 10:59:20 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  210Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    Liveness:   http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwt9b (ro)
  dnsmasq:
    Container ID:  containerd://b3854a778690be4b6e2f15ee13e3e9f83047f26d03e0184bf5c8e85bbae7135f
    Image:         gke.gcr.io/k8s-dns-dnsmasq-nanny:1.23.0-gke.20@sha256:e178b753d49a90ec32f1f45e0f52ce64019641d3fd45d8deadcf08cb73b8c840
    Image ID:      gke.gcr.io/k8s-dns-dnsmasq-nanny@sha256:e178b753d49a90ec32f1f45e0f52ce64019641d3fd45d8deadcf08cb73b8c840
    Ports:         53/UDP, 53/TCP
    Host Ports:    0/UDP, 0/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --no-negcache
      --dns-forward-max=1500
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
      --max-ttl=30
      --max-cache-ttl=30
      --max-tcp-connections=200
    State:          Running
      Started:      Sat, 03 May 2025 10:59:23 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        150m
      memory:     20Mi
    Liveness:     http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwt9b (ro)
  sidecar:
    Container ID:  containerd://4f0eec21afa27faa8b26375e09ce8fa1e1a54bc236138d9cdf78f4c3843b9b42
    Image:         gke.gcr.io/k8s-dns-sidecar:1.23.0-gke.20@sha256:9e60f83b54d010a7dd7e5a868a6713ad410442c72f0b7540cda010c50651c0bc
    Image ID:      gke.gcr.io/k8s-dns-sidecar@sha256:9e60f83b54d010a7dd7e5a868a6713ad410442c72f0b7540cda010c50651c0bc
    Port:          10054/TCP
    Host Port:     0/TCP
    Args:
      --v=2
      --logtostderr
      --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
      --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
    State:          Running
      Started:      Sat, 03 May 2025 10:59:25 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     20Mi
    Liveness:     http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwt9b (ro)
  prometheus-to-sd:
    Container ID:  containerd://a4e4c4d269b6621b7e18235ada7478f3d8087bfa418861742dcfe48829a478b7
    Image:         gke.gcr.io/prometheus-to-sd:v0.11.12-gke.51@sha256:798127b7368b1a3a2851a6a336776739f32b0ed741d5d6ee07b97d6ac2998fa3
    Image ID:      gke.gcr.io/prometheus-to-sd@sha256:798127b7368b1a3a2851a6a336776739f32b0ed741d5d6ee07b97d6ac2998fa3
    Port:          <none>
    Host Port:     <none>
    Command:
      /monitor
      --source=kubedns:http://localhost:10054?whitelisted=probe_kubedns_latency_ms,probe_kubedns_errors,probe_dnsmasq_latency_ms,probe_dnsmasq_errors,dnsmasq_misses,dnsmasq_hits
      --stackdriver-prefix=container.googleapis.com/internal/addons
      --api-override=https://monitoring.googleapis.com/
      --pod-id=$(POD_NAME)
      --namespace-id=$(POD_NAMESPACE)
      --v=2
    State:          Running
      Started:      Sat, 03 May 2025 10:59:28 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      POD_NAME:       kube-dns-845c6bc884-gqg2f (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwt9b (ro)
  kubedns-metrics-collector:
    Container ID:   containerd://219697c849c2c43a5627279b4b0c78954ebfd1402c479ef95af8928d66f59293
    Image:          gke.gcr.io/gke-metrics-collector:20250217_2300_RC0@sha256:b78e39d6a9780ee6e86038727ce45839e6cb2519c836db9cf95cf951ea47ab70
    Image ID:       gke.gcr.io/gke-metrics-collector@sha256:b78e39d6a9780ee6e86038727ce45839e6cb2519c836db9cf95cf951ea47ab70
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 03 May 2025 10:59:29 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  30Mi
    Requests:
      cpu:     5m
      memory:  30Mi
    Environment:
      GOMAXPROCS:             2
      COLLECTOR_CONFIG_PATH:  /conf/kubedns-metrics-collector-config-data.textproto
      SPLIT_GAUGE_BUFFER:     true
      PROJECT_NUMBER:         941969722215
      LOCATION:               us-east1-d
      CLUSTER_NAME:           xlou-cdm
      POD_NAMESPACE:          kube-system (v1:metadata.namespace)
      NODE_NAME:               (v1:spec.nodeName)
      POD_NAME:               kube-dns-845c6bc884-gqg2f (v1:metadata.name)
      CONTAINER_NAME:         kubedns-metrics-collector
      COMPONENT_VERSION:       (v1:metadata.annotations['components.gke.io/component-version'])
      COMPONENT_NAME:          (v1:metadata.annotations['components.gke.io/component-name'])
    Mounts:
      /conf from kubedns-metrics-collector-config-map-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwt9b (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  kube-dns-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-dns
    Optional:  true
  kubedns-metrics-collector-config-map-vol:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kubedns-metrics-collector-config-map
    Optional:  false
  kube-api-access-fwt9b:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             components.gke.io/gke-managed-components op=Exists
                             kubernetes.io/arch=arm64:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

====================================================================================================
=========================================== Pod logs ===========================================
====================================================================================================
I0503 10:59:23.440822       1 main.go:78] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --no-negcache --dns-forward-max=1500 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053 --max-ttl=30 --max-cache-ttl=30 --max-tcp-connections=200] true} /etc/k8s/dns/dnsmasq-nanny 10000000000 127.0.0.1:10053}
I0503 10:59:23.440962       1 nanny.go:124] Starting dnsmasq [-k --cache-size=1000 --no-negcache --dns-forward-max=1500 --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053 --max-ttl=30 --max-cache-ttl=30 --max-tcp-connections=200]
I0503 10:59:23.444620       1 nanny.go:149]
W0503 10:59:23.444641       1 nanny.go:150] Got EOF from stdout
I0503 10:59:23.444652       1 nanny.go:146] dnsmasq[11]: started, version 2.90 cachesize 1000
I0503 10:59:23.444659       1 nanny.go:146] dnsmasq[11]: compile time options: IPv6 GNU-getopt no-DBus no-UBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset no-nftset auth no-cryptohash no-DNSSEC loop-detect inotify dumpfile
I0503 10:59:23.444667       1 nanny.go:146] dnsmasq[11]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0503 10:59:23.444669       1 nanny.go:146] dnsmasq[11]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0503 10:59:23.444671       1 nanny.go:146] dnsmasq[11]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0503 10:59:23.444675       1 nanny.go:146] dnsmasq[11]: reading /etc/resolv.conf
I0503 10:59:23.444677       1 nanny.go:146] dnsmasq[11]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0503 10:59:23.444679       1 nanny.go:146] dnsmasq[11]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0503 10:59:23.444681       1 nanny.go:146] dnsmasq[11]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0503 10:59:23.444683       1 nanny.go:146] dnsmasq[11]: using nameserver 169.254.169.254#53
I0503 10:59:23.444686       1 nanny.go:146] dnsmasq[11]: read /etc/hosts - 9 names
Flag --logtostderr has been deprecated, will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components
I0503 10:59:25.747872       1 flags.go:57] FLAG: --add-dir-header="false"
I0503 10:59:25.747935       1 flags.go:57] FLAG: --alsologtostderr="false"
I0503 10:59:25.747938       1 flags.go:57] FLAG: --dnsmasq-addr="127.0.0.1"
I0503 10:59:25.747942       1 flags.go:57] FLAG: --dnsmasq-poll-interval-ms="5000"
I0503 10:59:25.747946       1 flags.go:57] FLAG: --dnsmasq-port="53"
I0503 10:59:25.747949       1 flags.go:57] FLAG: --log-backtrace-at=":0"
I0503 10:59:25.747953       1 flags.go:57] FLAG: --log-dir=""
I0503 10:59:25.747956       1 flags.go:57] FLAG: --log-file=""
I0503 10:59:25.747959       1 flags.go:57] FLAG: --log-file-max-size="1800"
I0503 10:59:25.747961       1 flags.go:57] FLAG: --log-flush-frequency="5s"
I0503 10:59:25.747965       1 flags.go:57] FLAG: --logtostderr="true"
I0503 10:59:25.747967       1 flags.go:57] FLAG: --one-output="false"
I0503 10:59:25.747970       1 flags.go:57] FLAG: --probe="[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}]"
I0503 10:59:25.747990       1 flags.go:57] FLAG: --prometheus-addr="0.0.0.0"
I0503 10:59:25.747993       1 flags.go:57] FLAG: --prometheus-namespace="kubedns"
I0503 10:59:25.747996       1 flags.go:57] FLAG: --prometheus-path="/metrics"
I0503 10:59:25.747999       1 flags.go:57] FLAG: --prometheus-port="10054"
I0503 10:59:25.748001       1 flags.go:57] FLAG: --skip-headers="false"
I0503 10:59:25.748004       1 flags.go:57] FLAG: --skip-log-headers="false"
I0503 10:59:25.748006       1 flags.go:57] FLAG: --stderrthreshold="2"
I0503 10:59:25.748009       1 flags.go:57] FLAG: --v="2"
I0503 10:59:25.748011       1 flags.go:57] FLAG: --version="false"
I0503 10:59:25.748015       1 flags.go:57] FLAG: --vmodule=""
I0503 10:59:25.748031       1 main.go:55] Version v1.23.0-gke.20
I0503 10:59:25.748038       1 server.go:46] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
I0503 10:59:25.748061       1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}
I0503 10:59:25.748099       1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}
I0503 10:59:28.125546       1 main.go:125] GCE config: &{Project:engineeringpit Zone:us-east1-d Cluster:xlou-cdm ClusterLocation:us-east1-d Instance:gke-xlou-cdm-frontend-18a44ad7-fq4g InstanceId:5890061935972993030}
I0503 10:59:28.125593       1 main.go:194] Taking source configs from flags
I0503 10:59:28.125603       1 main.go:196] Taking source configs from kubernetes api server
I0503 10:59:28.125607       1 main.go:128] Built the following source configs: [0xc0002944e0]
I0503 10:59:28.125644       1 main.go:205] Running prometheus-to-sd, monitored target is kubedns http://localhost:10054
E0508 11:19:28.127312       1 stackdriver.go:60] Error while sending request to Stackdriver Post "https://monitoring.googleapis.com/v3/projects/engineeringpit/timeSeries?alt=json&prettyPrint=false": read tcp 10.106.41.4:51128->172.217.203.95:443: read: connection reset by peer
{"level":"info","ts":1746269969.5525768,"caller":"collector/main.go:47","msg":"Starting Metrics Collector","log_first_n":2,"log_interval(s)":3600}
{"level":"info","ts":1746269969.5535376,"caller":"collector/multi_target_collector.go:50","msg":"Start Metrics Collector","target_url":"http://127.0.0.1:10055/metrics","target_name":"kubedns"}
{"level":"info","ts":1746269969.5536416,"caller":"collector/collector.go:111","msg":"Connecting to Cloud Monitoring","target_name":"kubedns","endpoint":"monitoring.googleapis.com:443"}
{"level":"error","ts":1746655109.5574338,"caller":"gcm/export.go:498","msg":"Failed to export self-observability metrics to Cloud Monitoring","error":"rpc error: code = Unavailable desc = error reading from server: read tcp 10.106.41.4:48050->74.125.134.95:443: read: connection reset by peer","stacktrace":"google3/cloud/kubernetes/metrics/common/gcm/gcm.(*exporter).startSelfObservability\n\tcloud/kubernetes/metrics/common/gcm/export.go:498"}
{"level":"error","ts":1746705329.557113,"caller":"gcm/export.go:498","msg":"Failed to export self-observability metrics to Cloud Monitoring","error":"rpc error: code = Unavailable desc = error reading from server: read tcp 10.106.41.4:49830->74.125.141.95:443: read: connection reset by peer","stacktrace":"google3/cloud/kubernetes/metrics/common/gcm/gcm.(*exporter).startSelfObservability\n\tcloud/kubernetes/metrics/common/gcm/export.go:498"}
{"level":"error","ts":1747305454.5661855,"caller":"gcm/export.go:498","msg":"Failed to export self-observability metrics to Cloud Monitoring","error":"rpc error: code = Unavailable desc = Visibility check was unavailable. Please retry the request and contact support if the problem persists","stacktrace":"google3/cloud/kubernetes/metrics/common/gcm/gcm.(*exporter).startSelfObservability\n\tcloud/kubernetes/metrics/common/gcm/export.go:498"}
{"level":"error","ts":1747312654.5695395,"caller":"gcm/export.go:434","msg":"Failed to export metrics to Cloud Monitoring","error":"rpc error: code = Unavailable desc = Visibility check was unavailable. Please retry the request and contact support if the problem persists","stacktrace":"google3/cloud/kubernetes/metrics/common/gcm/gcm.(*exporter).exportBuffer\n\tcloud/kubernetes/metrics/common/gcm/export.go:434\ngoogle3/cloud/kubernetes/metrics/common/gcm/gcm.(*exporter).flush\n\tcloud/kubernetes/metrics/common/gcm/export.go:383\ngoogle3/cloud/kubernetes/metrics/common/gcm/gcm.(*exporter).Flush\n\tcloud/kubernetes/metrics/common/gcm/export.go:369\ngoogle3/cloud/kubernetes/metrics/components/collector/adapter/adapter.(*adapter).Finalize\n\tcloud/kubernetes/metrics/components/collector/adapter/consume.go:131\ngoogle3/cloud/kubernetes/metrics/components/collector/prometheus/prometheus.(*parser).ParseProto\n\tcloud/kubernetes/metrics/components/collector/prometheus/parse.go:215\ngoogle3/cloud/kubernetes/metrics/components/collector/collector.runScrapeLoop\n\tcloud/kubernetes/metrics/components/collector/collector.go:101\ngoogle3/cloud/kubernetes/metrics/components/collector/collector.Run\n\tcloud/kubernetes/metrics/components/collector/collector.go:79\ngoogle3/cloud/kubernetes/metrics/components/collector/collector.Start.func1\n\tcloud/kubernetes/metrics/components/collector/multi_target_collector.go:58"}
{"level":"error","ts":1747312654.5696309,"caller":"collector/collector.go:103","msg":"Failed to process metrics","scrape_target":"http://127.0.0.1:10055/metrics","error":"failed to finalize exporting: \"1 error occurred:\\n\\t* failed to export 1 (out of 1) batches of metrics to Cloud Monitoring\\n\\n\"","stacktrace":"google3/cloud/kubernetes/metrics/components/collector/collector.runScrapeLoop\n\tcloud/kubernetes/metrics/components/collector/collector.go:103\ngoogle3/cloud/kubernetes/metrics/components/collector/collector.Run\n\tcloud/kubernetes/metrics/components/collector/collector.go:79\ngoogle3/cloud/kubernetes/metrics/components/collector/collector.Start.func1\n\tcloud/kubernetes/metrics/components/collector/multi_target_collector.go:58"}
I0503 10:59:20.975822       1 flags.go:57] FLAG: --add-dir-header="false"
I0503 10:59:20.976690       1 flags.go:57] FLAG: --alsologtostderr="false"
I0503 10:59:20.976693       1 flags.go:57] FLAG: --config-dir="/kube-dns-config"
I0503 10:59:20.976696       1 flags.go:57] FLAG: --config-map=""
I0503 10:59:20.976699       1 flags.go:57] FLAG: --config-map-namespace="kube-system"
I0503 10:59:20.976700       1 flags.go:57] FLAG: --config-period="10s"
I0503 10:59:20.976703       1 flags.go:57] FLAG: --dns-bind-address="0.0.0.0"
I0503 10:59:20.976705       1 flags.go:57] FLAG: --dns-port="10053"
I0503 10:59:20.976708       1 flags.go:57] FLAG: --domain="cluster.local."
I0503 10:59:20.976710       1 flags.go:57] FLAG: --federations=""
I0503 10:59:20.976713       1 flags.go:57] FLAG: --healthz-port="8081"
I0503 10:59:20.976715       1 flags.go:57] FLAG: --initial-sync-timeout="1m0s"
I0503 10:59:20.976717       1 flags.go:57] FLAG: --kube-master-url=""
I0503 10:59:20.976719       1 flags.go:57] FLAG: --kubecfg-file=""
I0503 10:59:20.976721       1 flags.go:57] FLAG: --log-backtrace-at=":0"
I0503 10:59:20.976727       1 flags.go:57] FLAG: --log-dir=""
I0503 10:59:20.976729       1 flags.go:57] FLAG: --log-file=""
I0503 10:59:20.976739       1 flags.go:57] FLAG: --log-file-max-size="1800"
I0503 10:59:20.976745       1 flags.go:57] FLAG: --log-flush-frequency="5s"
I0503 10:59:20.976747       1 flags.go:57] FLAG: --logtostderr="true"
I0503 10:59:20.976749       1 flags.go:57] FLAG: --nameservers=""
I0503 10:59:20.976750       1 flags.go:57] FLAG: --one-output="false"
I0503 10:59:20.976752       1 flags.go:57] FLAG: --profiling="false"
I0503 10:59:20.976766       1 flags.go:57] FLAG: --skip-headers="false"
I0503 10:59:20.976768       1 flags.go:57] FLAG: --skip-log-headers="false"
I0503 10:59:20.976771       1 flags.go:57] FLAG: --stderrthreshold="2"
I0503 10:59:20.976773       1 flags.go:57] FLAG: --v="2"
I0503 10:59:20.976775       1 flags.go:57] FLAG: --version="false"
I0503 10:59:20.976781       1 flags.go:57] FLAG: --vmodule=""
I0503 10:59:20.976796       1 dns.go:49] version: 1.23.0-gke.20
I0503 10:59:20.977053       1 server.go:73] Using configuration read from directory: /kube-dns-config with period 10s
I0503 10:59:20.977072       1 server.go:126] FLAG: --add-dir-header="false"
I0503 10:59:20.977075       1 server.go:126] FLAG: --alsologtostderr="false"
I0503 10:59:20.977077       1 server.go:126] FLAG: --config-dir="/kube-dns-config"
I0503 10:59:20.977080       1 server.go:126] FLAG: --config-map=""
I0503 10:59:20.977081       1 server.go:126] FLAG: --config-map-namespace="kube-system"
I0503 10:59:20.977084       1 server.go:126] FLAG: --config-period="10s"
I0503 10:59:20.977086       1 server.go:126] FLAG: --dns-bind-address="0.0.0.0"
I0503 10:59:20.977088       1 server.go:126] FLAG: --dns-port="10053"
I0503 10:59:20.977093       1 server.go:126] FLAG: --domain="cluster.local."
I0503 10:59:20.977096       1 server.go:126] FLAG: --federations=""
I0503 10:59:20.977098       1 server.go:126] FLAG: --healthz-port="8081"
I0503 10:59:20.977100       1 server.go:126] FLAG: --initial-sync-timeout="1m0s"
I0503 10:59:20.977103       1 server.go:126] FLAG: --kube-master-url=""
I0503 10:59:20.977105       1 server.go:126] FLAG: --kubecfg-file=""
I0503 10:59:20.977107       1 server.go:126] FLAG: --log-backtrace-at=":0"
I0503 10:59:20.977109       1 server.go:126] FLAG: --log-dir=""
I0503 10:59:20.977112       1 server.go:126] FLAG: --log-file=""
I0503 10:59:20.977113       1 server.go:126] FLAG: --log-file-max-size="1800"
I0503 10:59:20.977116       1 server.go:126] FLAG: --log-flush-frequency="5s"
I0503 10:59:20.977118       1 server.go:126] FLAG: --logtostderr="true"
I0503 10:59:20.977121       1 server.go:126] FLAG: --nameservers=""
I0503 10:59:20.977125       1 server.go:126] FLAG: --one-output="false"
I0503 10:59:20.977127       1 server.go:126] FLAG: --profiling="false"
I0503 10:59:20.977129       1 server.go:126] FLAG: --skip-headers="false"
I0503 10:59:20.977131       1 server.go:126] FLAG: --skip-log-headers="false"
I0503 10:59:20.977133       1 server.go:126] FLAG: --stderrthreshold="2"
I0503 10:59:20.977136       1 server.go:126] FLAG: --v="2"
I0503 10:59:20.977139       1 server.go:126] FLAG: --version="false"
I0503 10:59:20.977142       1 server.go:126] FLAG: --vmodule=""
I0503 10:59:20.977278       1 server.go:182] Starting SkyDNS server (0.0.0.0:10053)
I0503 10:59:20.977421       1 server.go:194] Skydns metrics enabled (/metrics:10055)
I0503 10:59:20.977434       1 dns.go:190] Starting endpointsController
I0503 10:59:20.977437       1 dns.go:193] Starting serviceController
I0503 10:59:20.977494       1 dns.go:186] Configuration updated: {TypeMeta:{Kind: APIVersion:} Federations:map[] StubDomains:map[] UpstreamNameservers:[]}
I0503 10:59:20.977525       1 log.go:245] skydns: ready for queries on cluster.local. for tcp://0.0.0.0:10053 [rcache 0]
I0503 10:59:20.977553       1 log.go:245] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0]
I0503 10:59:21.478187       1 dns.go:224] Initialized services and endpoints from apiserver
I0503 10:59:21.478214       1 server.go:150] Setting up Healthz Handler (/readiness)
I0503 10:59:21.478231       1 server.go:155] Setting up cache handler (/cache)
I0503 10:59:21.478240       1 server.go:136] Status HTTP port 8081