Fix missing logs when kaniko exits immediately #2352


Merged
merged 2 commits on Jul 1, 2019

Conversation

cedrickring
Contributor

This is my proposed fix for #1978 (as described in #2083).

@prary @priyawadhwa @tejal29

@dgageot
Contributor

dgageot left a comment

It feels strange to duplicate the pods.GetLogs part but it does the job.
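
For context, here is a minimal sketch of the duplicated piece: streaming the kaniko container's output through client-go's pods.GetLogs. The function name and client wiring are illustrative, not skaffold's exact code.

// Minimal sketch, assuming a client-go clientset; streamKanikoLogs is an
// illustrative name, not skaffold's exact function.
package kaniko

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamKanikoLogs follows the "kaniko" container's logs until the stream closes.
func streamKanikoLogs(ctx context.Context, client kubernetes.Interface, namespace, podName string) error {
	req := client.CoreV1().Pods(namespace).GetLogs(podName, &corev1.PodLogOptions{
		Container: "kaniko",
		Follow:    true,
	})
	logs, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer logs.Close()
	_, err = io.Copy(os.Stdout, logs)
	return err
}

Such a request fails while the container is still initializing, which is what the "unable to get kaniko pod logs: ... PodInitializing" debug lines later in this thread show.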

@samos123

samos123 commented Jan 3, 2020

It doesn't look like this fix worked. I wasn't able to get any logs when running v1.1.0. See below:

$ skaffold dev -v debug
DEBU[0000] validating yamltags of struct KubectlFlags
INFO[0000] Using kubectl context: gke_gsam-123_us-central1_standard-cluster-1
DEBU[0000] Using builder: cluster
DEBU[0000] setting Docker user agent to skaffold-v1.1.0
Listing files to watch...
 - gcr.io/gsam-123/k8s-qos
DEBU[0000] Skipping watch on remote dependency https://download.docker.com/linux/static/stable/x86_64/docker-18.09.9.tgz
DEBU[0000] Found dependencies for dockerfile: [{pkg /go/src/github.com/samos123/k8s-qos/pkg false} {cmd /go/src/github.com/samos123/k8s-qos/cmd false} {tools/getveth.sh /usr/local/bin true}]
INFO[0000] List generated in 4.875125ms
Generating tags...
 - gcr.io/gsam-123/k8s-qos -> DEBU[0000] Running command: [git describe --tags --always]
DEBU[0000] Command output: [73033d9
]
DEBU[0000] Running command: [git status . --porcelain]
DEBU[0000] Command output: [ M skaffold.yaml
?? kaniko-secret.json
]
gcr.io/gsam-123/k8s-qos:73033d9-dirty
INFO[0000] Tags generated in 15.007576ms
Checking cache...
DEBU[0000] Skipping watch on remote dependency https://download.docker.com/linux/static/stable/x86_64/docker-18.09.9.tgz
DEBU[0000] Found dependencies for dockerfile: [{pkg /go/src/github.com/samos123/k8s-qos/pkg false} {cmd /go/src/github.com/samos123/k8s-qos/cmd false} {tools/getveth.sh /usr/local/bin true}]
 - gcr.io/gsam-123/k8s-qos: Not found. Building
INFO[0000] Cache check complete in 1.392808ms
Creating kaniko secret [default/kaniko-secret]...
DEBU[0000] getting client config for kubeContext: ``
DEBU[0000] No pull secret specified. Checking for one in the cluster.
Building [gcr.io/gsam-123/k8s-qos]...
DEBU[0000] Skipping watch on remote dependency https://download.docker.com/linux/static/stable/x86_64/docker-18.09.9.tgz
DEBU[0000] Found dependencies for dockerfile: [{pkg /go/src/github.com/samos123/k8s-qos/pkg false} {cmd /go/src/github.com/samos123/k8s-qos/cmd false} {tools/getveth.sh /usr/local/bin true}]
Storing build context at /tmp/context-78c7ee94b105375101ea5e925bdffa7b.tar.gz
DEBU[0000] getting client config for kubeContext: ``
DEBU[0001] getting client config for kubeContext: ``
INFO[0001] Waiting for kaniko-pd7mj to be initialized
DEBU[0003] Running command: [kubectl --context gke_gsam-123_us-central1_standard-cluster-1 exec -i kaniko-pd7mj -c kaniko-init-container -n default -- tar -xzf - -C /kaniko/buildcontext]
DEBU[0005] Running command: [kubectl --context gke_gsam-123_us-central1_standard-cluster-1 exec kaniko-pd7mj -c kaniko-init-container -n default -- touch /tmp/complete]
INFO[0006] Waiting for kaniko-pd7mj to be complete
DEBU[0007] unable to get kaniko pod logs: container "kaniko" in pod "kaniko-pd7mj" is waiting to start: PodInitializing
DEBU[0008] unable to get kaniko pod logs: container "kaniko" in pod "kaniko-pd7mj" is waiting to start: PodInitializing
FATA[0008] exiting dev mode because first build failed: build failed: build failed: building [gcr.io/gsam-123/k8s-qos]: build artifact: waiting for pod to complete: condition error: pod already in terminal phase: Failed

The only way to view the logs was to very quickly run kubectl logs while the pod was still running. Even Stackdriver didn't have the logs, or I just don't know how to use the correct filter :)

A way to reproduce is to create a kaniko secret with an incorrect service account, which causes a "creating push check transport for gcr.io failed" error:

@wstrange
Contributor

wstrange commented Jan 7, 2020

Running into the same issue as @samos123 (or at least I think I am). There is no output - unless kaniko isn't producing any?

@samos123

samos123 commented Jan 8, 2020

Kaniko in my case was producing logs; I saw them by quickly running kubectl logs, but it was very hard to time it exactly right and catch them.

@cedrickring
Contributor Author

I assume #3238 reintroduced this. A quick fix would be to call waitForLogs both when a pod succeeds and when it terminates with an error or times out.
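
A rough sketch of that ordering, assuming helpers along the lines of the names above (waitForLogs, waitForPodComplete, and runKanikoBuild here are stand-ins, not skaffold's exact functions):

// Illustrative only: the helpers below are stand-ins for the ones mentioned
// above, not skaffold's exact API.
package kaniko

import (
	"context"
	"fmt"
	"io"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	typedv1 "k8s.io/client-go/kubernetes/typed/core/v1"
)

// waitForLogs streams the kaniko container's output; errors are ignored because
// logs are best-effort once the pod has reached a terminal phase.
func waitForLogs(ctx context.Context, pods typedv1.PodInterface, podName string) {
	logs, err := pods.GetLogs(podName, &corev1.PodLogOptions{Container: "kaniko"}).Stream(ctx)
	if err != nil {
		return
	}
	defer logs.Close()
	io.Copy(os.Stdout, logs)
}

// waitForPodComplete polls until the pod reaches a terminal phase or the timeout expires.
func waitForPodComplete(ctx context.Context, pods typedv1.PodInterface, podName string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		pod, err := pods.Get(ctx, podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod already in terminal phase: %s", pod.Status.Phase)
		}
		return false, nil
	})
}

// runKanikoBuild drains the logs in every outcome, so a pod that fails or times
// out still surfaces its build output instead of exiting silently.
func runKanikoBuild(ctx context.Context, pods typedv1.PodInterface, podName string, timeout time.Duration) error {
	completeErr := waitForPodComplete(ctx, pods, podName, timeout)
	waitForLogs(ctx, pods, podName)
	if completeErr != nil {
		return fmt.Errorf("waiting for pod to complete: %w", completeErr)
	}
	return nil
}

The point is only the ordering: stream the logs before returning the completion error, regardless of whether the pod succeeded, failed, or timed out.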
