
Kubernetes Pod fails with CrashLoopBackOff

I am following this guide to set up a Pod with minikube that pulls an image from a private repository hosted on hub.docker.com.

When I try to set up a Pod that pulls the image, I get "CrashLoopBackOff".

pod config:

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: ha/prod:latest
  imagePullSecrets:
    - name: regsecret
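
For reference, a secret like regsecret is typically created with the standard kubectl command below (the placeholder credentials are illustrative, not from the original post):

kubectl create secret docker-registry regsecret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>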

Output of "get pod":

kubectl get pod private-reg
NAME          READY     STATUS             RESTARTS   AGE
private-reg   0/1       CrashLoopBackOff   5          4m

As far as I can tell there is no problem with the images: when I pull and run them manually, they work.

(You can see the Successfully pulled image "ha/prod:latest" events below.)

The problem also occurs when I push a generic image such as CentOS to the repository and try to pull and run it with the Pod.

The secret also seems to work fine, and I can see the pulls being counted in the private repository.
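
For reference, a generic way to double-check such a secret (not part of the original post, just the stock kubectl commands) is to dump it and decode the Docker config it carries:

kubectl get secret regsecret --output=yaml
kubectl get secret regsecret --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode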

Here is the output of the command:

kubectl beschreibt pods private-reg:

[~]$ kubectl describe pods private-reg
Name:       private-reg
Namespace:  default
Node:       minikube/192.168.99.100
Start Time: Thu, 22 Jun 2017 17:13:24 +0300
Labels:     <none>
Annotations:    <none>
Status:     Running
IP:     172.17.0.5
Controllers:    <none>
Containers:
  private-reg-container:
    Container ID:   docker://1aad64750d0ba9ba826fe4f12c8814f7db77293078f8047feec686fcd8f90132
    Image:      ha/prod:latest
    Image ID:       docker://sha256:7335859e2071af518bcd0e2f373f57c1da643bb37c7e6bbc125d171ff98f71c0
    Port:       
    State:      Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 01 Jan 0001 00:00:00 +0000
      Finished:     Thu, 22 Jun 2017 17:20:04 +0300
    Ready:      False
    Restart Count:  6
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-bhvgz (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  default-token-bhvgz:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-bhvgz
    Optional:   false
QoS Class:  BestEffort
Node-Selectors: <none>
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubObjectPath               Type        Reason      Message
  --------- --------    -----   ----            -------------               --------    ------      -------
  9m        9m      1   default-scheduler                       Normal      Scheduled   Successfully assigned private-reg to minikube
  8m        8m      1   kubelet, minikube   spec.containers{private-reg-container}  Normal      Created     Created container with id 431fecfd1d2ca03d29fd88fd6c663e66afb59dc5e86487409002dd8e9987945c
  8m        8m      1   kubelet, minikube   spec.containers{private-reg-container}  Normal      Started     Started container with id 431fecfd1d2ca03d29fd88fd6c663e66afb59dc5e86487409002dd8e9987945c
  8m        8m      1   kubelet, minikube   spec.containers{private-reg-container}  Normal      Started     Started container with id 223e6af99bb950570a27056d7401137ff9f3dc895f4f313a36e73ef6489eb61a
  8m        8m      1   kubelet, minikube   spec.containers{private-reg-container}  Normal      Created     Created container with id 223e6af99bb950570a27056d7401137ff9f3dc895f4f313a36e73ef6489eb61a
  8m        8m      2   kubelet, minikube                       Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 10s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"

  8m    8m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Started     Started container with id a98377f9aedc5947fe1dd006caddb11fb48fa2fd0bb06c20667e0c8b83a3ab6a
  8m    8m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Created     Created container with id a98377f9aedc5947fe1dd006caddb11fb48fa2fd0bb06c20667e0c8b83a3ab6a
  8m    8m  2   kubelet, minikube                       Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 20s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"

  8m    8m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Started     Started container with id 261f430a80ff5a312bdbdee78558091a9ae7bc9fc6a9e0676207922f1a576841
  8m    8m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Created     Created container with id 261f430a80ff5a312bdbdee78558091a9ae7bc9fc6a9e0676207922f1a576841
  8m    7m  3   kubelet, minikube                       Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 40s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"

  7m    7m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Created     Created container with id 7251ab76853d4178eff59c10bb41e52b2b1939fbee26e546cd564e2f6b4a1478
  7m    7m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Started     Started container with id 7251ab76853d4178eff59c10bb41e52b2b1939fbee26e546cd564e2f6b4a1478
  7m    5m  7   kubelet, minikube                       Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"

  5m    5m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Created     Created container with id 347868d03fc9730417cf234e4c96195bb9b45a6cc9d9d97973855801d52e2a02
  5m    5m  1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Started     Started container with id 347868d03fc9730417cf234e4c96195bb9b45a6cc9d9d97973855801d52e2a02
  5m    3m  12  kubelet, minikube                       Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"

  9m    2m      7   kubelet, minikube   spec.containers{private-reg-container}  Normal  Pulling     pulling image "ha/prod:latest"
  2m    2m      1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Started     Started container with id 1aad64750d0ba9ba826fe4f12c8814f7db77293078f8047feec686fcd8f90132
  8m    2m      7   kubelet, minikube   spec.containers{private-reg-container}  Normal  Pulled      Successfully pulled image "ha/prod:latest"
  2m    2m      1   kubelet, minikube   spec.containers{private-reg-container}  Normal  Created     Created container with id 1aad64750d0ba9ba826fe4f12c8814f7db77293078f8047feec686fcd8f90132
  8m    <invalid>   40  kubelet, minikube   spec.containers{private-reg-container}  Warning BackOff     Back-off restarting failed container
  2m    <invalid>   14  kubelet, minikube                       Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "private-reg-container" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"

Here is the output of the command:

kubectl --v=8 logs private-reg

I0622 17:35:01.043739   15981 cached_discovery.go:71] returning cached discovery info from /home/demo/.kube/cache/discovery/192.168.99.100_8443/apps/v1beta1/serverresources.json
I0622 17:35:01.043951   15981 cached_discovery.go:71] returning cached discovery info from /home/demo/.kube/cache/discovery/192.168.99.100_8443/v1/serverresources.json
I0622 17:35:01.045061   15981 cached_discovery.go:118] returning cached discovery info from /home/demo/.kube/cache/discovery/192.168.99.100_8443/servergroups.json
I0622 17:35:01.045175   15981 round_trippers.go:395] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/private-reg
I0622 17:35:01.045182   15981 round_trippers.go:402] Request Headers:
I0622 17:35:01.045187   15981 round_trippers.go:405]     Accept: application/json, */*
I0622 17:35:01.045191   15981 round_trippers.go:405]     User-Agent: kubectl/v1.6.6 (linux/AMD64) kubernetes/7fa1c17
I0622 17:35:01.072863   15981 round_trippers.go:420] Response Status: 200 OK in 27 milliseconds
I0622 17:35:01.072900   15981 round_trippers.go:423] Response Headers:
I0622 17:35:01.072921   15981 round_trippers.go:426]     Content-Type: application/json
I0622 17:35:01.072930   15981 round_trippers.go:426]     Content-Length: 2216
I0622 17:35:01.072936   15981 round_trippers.go:426]     Date: Thu, 22 Jun 2017 14:35:31 GMT
I0622 17:35:01.072994   15981 request.go:991] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"private-reg","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/private-reg","uid":"f4340638-5754-11e7-978a-08002773375c","resourceVersion":"3070","creationTimestamp":"2017-06-22T14:13:24Z"},"spec":{"volumes":[{"name":"default-token-bhvgz","secret":{"secretName":"default-token-bhvgz","defaultMode":420}}],"containers":[{"name":"private-reg-container","image":"ha/prod:latest","resources":{},"volumeMounts":[{"name":"default-token-bhvgz","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"minikube","securityContext":{},"imagePullSecrets":[{"name":"regsecret"}],"schedulerName":"default-scheduler"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-06-22T14:13:24Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2017-06-22T14:13:24Z","reason":"ContainersNotReady","message":"containers with unready status: [private-reg-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-06-22T14:13:24Z"}],"hostIP":"192.168.99.100","podIP":"172.17.0.5","startTime":"2017-06-22T14:13:24Z","containerStatuses":[{"name":"private-reg-container","state":{"waiting":{"reason":"CrashLoopBackOff","message":"Back-off 5m0s restarting failed container=private-reg-container pod=private-reg_default(f4340638-5754-11e7-978a-08002773375c)"}},"lastState":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":null,"finishedAt":"2017-06-22T14:30:36Z","containerID":"docker://a4cb436a79b0b21bb385e544d424b2444a80ca66160ef21af30ab69ed2e23b32"}},"ready":false,"restartCount":8,"image":"ha/prod:latest","imageID":"docker://sha256:7335859e2071af518bcd0e2f373f57c1da643bb37c7e6bbc125d171ff98f71c0","containerID":"docker://a4cb436a79b0b21bb385e544d424b2444a80ca66160ef21af30ab69ed2e23b32"}],"qosClass":"BestEffort"}}
I0622 17:35:01.074108   15981 round_trippers.go:395] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/private-reg/log
I0622 17:35:01.074126   15981 round_trippers.go:402] Request Headers:
I0622 17:35:01.074132   15981 round_trippers.go:405]     Accept: application/json, */*
I0622 17:35:01.074137   15981 round_trippers.go:405]     User-Agent: kubectl/v1.6.6 (linux/AMD64) kubernetes/7fa1c17
I0622 17:35:01.079257   15981 round_trippers.go:420] Response Status: 200 OK in 5 milliseconds
I0622 17:35:01.079289   15981 round_trippers.go:423] Response Headers:
I0622 17:35:01.079299   15981 round_trippers.go:426]     Content-Type: text/plain
I0622 17:35:01.079307   15981 round_trippers.go:426]     Content-Length: 0
I0622 17:35:01.079315   15981 round_trippers.go:426]     Date: Thu, 22 Jun 2017 14:35:31 GMT
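
The log request returns Content-Length: 0, i.e. the current container has produced no output. A standard follow-up (just the stock kubectl flag, not something from the original post) is to ask for the logs of the previous, crashed container:

kubectl logs private-reg --previous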

How can I fix this problem?

Update: the output of:

kubectl --v=8 logs ps-agent-2028336249-3pk43 --namespace=default -p

I0625 11:30:01.569903   13420 round_trippers.go:395] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/ps-agent-2028336249-3pk43
I0625 11:30:01.569920   13420 round_trippers.go:402] Request Headers:
I0625 11:30:01.569927   13420 round_trippers.go:405]     User-Agent: kubectl/v1.6.6 (linux/AMD64) kubernetes/7fa1c17
I0625 11:30:01.569934   13420 round_trippers.go:405]     Accept: application/json, */*
I0625 11:30:01.599026   13420 round_trippers.go:420] Response Status: 200 OK in 29 milliseconds
I0625 11:30:01.599048   13420 round_trippers.go:423] Response Headers:
I0625 11:30:01.599056   13420 round_trippers.go:426]     Date: Sun, 25 Jun 2017 08:30:01 GMT
I0625 11:30:01.599062   13420 round_trippers.go:426]     Content-Type: application/json
I0625 11:30:01.599069   13420 round_trippers.go:426]     Content-Length: 2794
I0625 11:30:01.599264   13420 request.go:991] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"ps-agent-2028336249-3pk43","generateName":"ps-agent-2028336249-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/ps-agent-2028336249-3pk43","uid":"87c69072-597e-11e7-83cd-08002773375c","resourceVersion":"14354","creationTimestamp":"2017-06-25T08:16:03Z","labels":{"pod-template-hash":"2028336249","run":"ps-agent"},"annotations":{"kubernetes.io/created-by":"{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"default\",\"name\":\"ps-agent-2028336249\",\"uid\":\"87c577b5-597e-11e7-83cd-08002773375c\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"13446\"}}\n"},"ownerReferences":[{"apiVersion":"extensions/v1beta1","kind":"ReplicaSet","name":"ps-agent-2028336249","uid":"87c577b5-597e-11e7-83cd-08002773375c","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-bhvgz","secret":{"secretName":"default-token-bhvgz","defaultMode":420}}],"containers":[{"name":"ps-agent","image":"ha/prod:ps-agent-latest","resources":{},"volumeMounts":[{"name":"default-token-bhvgz","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"minikube","securityContext":{},"schedulerName":"default-scheduler"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-06-25T08:16:03Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2017-06-25T08:16:03Z","reason":"ContainersNotReady","message":"containers with unready status: [ps-agent]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2017-06-25T08:16:03Z"}],"hostIP":"192.168.99.100","podIP":"172.17.0.5","startTime":"2017-06-25T08:16:03Z","containerStatuses":[{"name":"ps-agent","state":{"waiting":{"reason":"CrashLoopBackOff","message":"Back-off 5m0s restarting failed container=ps-agent pod=ps-agent-2028336249-3pk43_default(87c69072-597e-11e7-83cd-08002773375c)"}},"lastState":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":null,"finishedAt":"2017-06-25T08:27:17Z","containerID":"docker://1aa9dfbfeb80042c6f4c8d04cabb3306ac1cd52963568e621019e2f1f0ee081b"}},"ready":false,"restartCount":…,"image":"ha/prod:ps-agent-latest","imageID":"docker://sha256:eb5307c4366fc129d022703625a5f30ff175b5e1a24dbe39fd4c32e726a0ee7b","containerID":"docker://1aa9dfbfeb80042c6f4c8d04cabb3306ac1cd52963568e621019e2f1f0ee081b"}],"qosClass":"BestEffort"}}
I0625 11:30:01.600727   13420 round_trippers.go:395] GET https://192.168.99.100:8443/api/v1/namespaces/default/pods/ps-agent-2028336249-3pk43/log?previous=true
I0625 11:30:01.600747   13420 round_trippers.go:402] Request Headers:
I0625 11:30:01.600757   13420 round_trippers.go:405]     Accept: application/json, */*
I0625 11:30:01.600766   13420 round_trippers.go:405]     User-Agent: kubectl/v1.6.6 (linux/AMD64) kubernetes/7fa1c17
I0625 11:30:01.632473   13420 round_trippers.go:420] Response Status: 200 OK in 31 milliseconds
I0625 11:30:01.632545   13420 round_trippers.go:423] Response Headers:
I0625 11:30:01.632569   13420 round_trippers.go:426]     Date: Sun, 25 Jun 2017 08:30:01 GMT
I0625 11:30:01.632592   13420 round_trippers.go:426]     Content-Type: text/plain
I0625 11:30:01.632615   13420 round_trippers.go:426]     Content-Length: 0

13
haim ari

The problem was caused by the Docker container exiting as soon as its startup process finished: with restartPolicy Always, Kubernetes keeps restarting it, which shows up as CrashLoopBackOff. I added a command that runs forever and it worked. This issue is mentioned here.
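
A minimal sketch of that fix, assuming the image has a shell (the sleep loop is a placeholder for "a command that runs forever", not the actual command from the answer):

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
    - name: private-reg-container
      image: ha/prod:latest
      # Keep a foreground process alive so the container does not
      # exit with code 0 and get restarted by Kubernetes.
      command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
  imagePullSecrets:
    - name: regsecret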

12
haim ari

I had a similar "CrashLoopBackOff" problem while debugging with kubectl get pods and kubectl logs. I found out that my command arguments were wrong.
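
A sketch of that kind of mistake (container name and script are hypothetical, not from the answer): command and args must be lists of separate tokens, so a spec along these lines crashes if the arguments are mangled into one string:

containers:
  - name: ps-agent
    image: ha/prod:ps-agent-latest
    # Wrong: args: ["agent.py --verbose"] makes the interpreter look for
    # a file literally named "agent.py --verbose" and exit immediately.
    # Right: each token as its own list element.
    command: ["python"]
    args: ["agent.py", "--verbose"]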

0
Venu

I ran into the same error.

NAME READY STATUS RESTARTS AGE 
 pod/webapp 0/1 CrashLoopBackOff 5 47h 

My problem was that I was trying to run two different Pods with the same metadata name.

kind: Pod
metadata:
  name: webapp
  labels: ...

To list the names of all your Pods, run: kubectl get pods

NAME READY STATUS RESTARTS AGE 
 webapp 1/1 Running 15 47h 

Then I changed the conflicting Pod name and everything worked fine.

NAME READY STATUS RESTARTS AGE 
 webapp 1/1 Running 17 2d 
 webapp-release-0-5 1/1 Running 0 13m 
0
Rootdevelopper