====================================================================================================
========================================= Pod describe =========================================
====================================================================================================
Name:         overseer-0-b8c87fc4d-w682s
Namespace:    xlou
Priority:     0
Node:         gke-xlou-cdm-n2-highcpu-32-4d583d79-39cb/10.142.0.23
Start Time:   Thu, 01 Sep 2022 16:43:29 +0000
Labels:       app=overseer-0
              pod-template-hash=b8c87fc4d
              release=overseer
              skaffold.dev/run-id=8857c384-6b0f-44e9-aa5a-e4351236d91c
Annotations:  kubectl.kubernetes.io/restartedAt: 2022-08-31T20:10:26Z
Status:       Running
IP:           10.0.0.17
IPs:
  IP:  10.0.0.17
Controlled By:  ReplicaSet/overseer-0-b8c87fc4d
Containers:
  overseer:
    Container ID:  containerd://058ef000e1299e38ad1070be78a6c43d858c278f1ade0c06756229668ce23ad3
    Image:         gcr.io/engineeringpit/lodestar-images/lodestarbox:master-stable
    Image ID:      gcr.io/engineeringpit/lodestar-images/lodestarbox@sha256:26f96cf182d42a4833a451f9c7654b081ae07e4b9a1ed36c3fe516d229b3002a
    Port:          <none>
    Host Port:     <none>
    Args:
      /bin/sh
      -c
      PYTHONUNBUFFERED=x /lodestar/pyrock/shared/scripts/overseer/run.py
    State:          Running
      Started:      Thu, 01 Sep 2022 16:44:47 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     24
      memory:  24Gi
    Requests:
      cpu:     2
      memory:  2Gi
    Environment Variables from:
      overseer-config-0  ConfigMap  Optional: false
    Environment:  <none>
    Mounts:
      /results from results (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kw2cp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  results:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  overseer-0
    ReadOnly:   false
  kube-api-access-kw2cp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              forgerock.io/role=frontend
Tolerations:                 WorkerDedicatedFrontend=true:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age   From                     Message
  ----     ------                  ----  ----                     -------
  Warning  FailedScheduling        44m   default-scheduler        0/2 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) were unschedulable.
  Warning  FailedScheduling        44m   default-scheduler        0/2 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) were unschedulable.
  Normal   Scheduled               42m   default-scheduler        Successfully assigned xlou/overseer-0-b8c87fc4d-w682s to gke-xlou-cdm-n2-highcpu-32-4d583d79-39cb
  Normal   NotTriggerScaleUp       43m   cluster-autoscaler       pod didn't trigger scale-up:
  Normal   SuccessfulAttachVolume  41m   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-aee1a91c-c865-4c02-bfee-89c662f91c87"
  Normal   Pulling                 41m   kubelet                  Pulling image "gcr.io/engineeringpit/lodestar-images/lodestarbox:master-stable"
  Normal   Pulled                  40m   kubelet                  Successfully pulled image "gcr.io/engineeringpit/lodestar-images/lodestarbox:master-stable" in 1m0.72428244s
  Normal   Created                 40m   kubelet                  Created container overseer
  Normal   Started                 40m   kubelet                  Started container overseer
====================================================================================================
=========================================== Pod logs ===========================================
====================================================================================================
[01/Sep/2022 17:44:47] httpd web_server : Server is running at http://localhost:8080
[01/Sep/2022 17:44:47] overseer        : Seems that overseer was already running, start with order id 95
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[01/Sep/2022 17:44:47] overseer        : Waiting for order /results/orders/order.json
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
10.0.0.12 - - [01/Sep/2022 18:25:28] "GET / HTTP/1.1" 200 -