Preventing Kaniko log loss #2152

Closed · wants to merge 2 commits
6 changes: 6 additions & 0 deletions pkg/skaffold/build/cluster/sources/sources.go
@@ -77,6 +77,12 @@ func podTemplate(clusterDetails *latest.ClusterDetails, artifact *latest.KanikoA
},
Resources: resourceRequirements(clusterDetails.Resources),
},
{
Name: "logger-container",
Image: constants.DefaultBusyboxImage,
ImagePullPolicy: v1.PullIfNotPresent,
Command: []string{"sh", "-c", "while [ $(ps -ef | grep kaniko | wc -l) -gt 1 ] ; do sleep 1; done; sleep " + clusterDetails.PodGracePeriodSeconds},
Contributor:

Hmm, I am a little concerned about adding a sleep here. I think the problem you are trying to solve is:

  • the kaniko pod dies and gets cleaned up by the kube admin before we could stream logs?

Is that correct?
Adding a grace period still would not fix this issue: what if we still don't end up reading the logs within the grace period?
I think the right approach would be for kaniko to write logs to a file on a volume-mounted path, with a sidecar container attached that reads and forwards the logs. https://kubernetes.io/docs/concepts/cluster-administration/logging/#using-a-sidecar-container-with-the-logging-agent

But again, that's a big change. Not sure what the correct thing is here.
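The sidecar pattern from the linked Kubernetes docs could be sketched roughly as follows. This is a hedged illustration, not code from skaffold or this PR: it assumes kaniko's output could be teed to a file on a shared emptyDir volume, and all paths (`/logs/build.log`) and the `$IMAGE` placeholder are hypothetical.

```go
package main

import "fmt"

// buildContainerCommands sketches the two container commands for the
// sidecar-logging pattern. Everything here is illustrative: the shared
// volume path and flags are assumptions, not part of skaffold or kaniko.
func buildContainerCommands() (kanikoCmd, sidecarCmd []string) {
	// Build container: run kaniko and duplicate its output onto the shared volume.
	kanikoCmd = []string{"sh", "-c",
		"/kaniko/executor --dockerfile=Dockerfile --destination=$IMAGE 2>&1 | tee /logs/build.log"}
	// Sidecar: stream the file from the first line and keep following it, so
	// no lines are lost even if the build container exits before we attach.
	sidecarCmd = []string{"sh", "-c", "tail -n +1 -f /logs/build.log"}
	return kanikoCmd, sidecarCmd
}

func main() {
	k, s := buildContainerCommands()
	fmt.Println(k)
	fmt.Println(s)
}
```

The trade-off the reviewer notes still applies: this needs changes on the kaniko side (or a wrapper) so the build actually writes to the shared volume.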

Contributor Author @prary (May 22, 2019):

the kaniko pod dies and gets cleaned up by the kube admin before we could stream logs?

Exactly.

Adding a grace period still would not fix this issue: what if we still don't end up reading the logs within the grace period?

The user can simply increase the grace period, which is user-configurable.

I think the right approach would be for kaniko to write logs to a file on a volume-mounted path, with a sidecar container attached that reads and forwards the logs.

Yes, that could be a big change, and it is one possible solution. We could also change kaniko itself to store logs on a volume. Or maybe something even better.

kaniko should write logs to a file on a volume-mounted path, and we attach a sidecar container that reads and forwards the logs.

How would we make sure that all the logs have been fetched, rather than some still streaming?
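The wait loop in the proposed logger container can be built and tested in isolation. A minimal self-contained Go sketch (`waitCommand` is a hypothetical helper, not part of this PR) that assembles the busybox loop from a configurable grace period; note that busybox ash does not implement `[[ ]]`, so the portable `[ ]` test is used:

```go
package main

import "fmt"

// waitCommand builds the shell loop the logger sidecar runs: poll until the
// kaniko process is gone (ps | grep counts the grep itself, hence -gt 1),
// then keep the pod alive for the grace period so the log streamer can
// catch up. Hypothetical helper, not part of the PR.
func waitCommand(gracePeriodSeconds string) string {
	return "while [ $(ps -ef | grep kaniko | wc -l) -gt 1 ] ; do sleep 1; done; sleep " + gracePeriodSeconds
}

func main() {
	fmt.Println(waitCommand("2"))
}
```

The grace period stays a string because it is concatenated straight into the shell command rather than used as a duration.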

},
},
RestartPolicy: v1.RestartPolicyNever,
Volumes: []v1.Volume{{
1 change: 1 addition & 0 deletions pkg/skaffold/constants/constants.go
@@ -51,6 +51,7 @@ const (
DefaultKanikoCacheDirMountPath = "/cache"
DefaultKanikoDockerConfigSecretName = "docker-cfg"
DefaultKanikoDockerConfigPath = "/kaniko/.docker"
DefaultKanikoPodGracePeriodSeconds = "2"

DefaultBusyboxImage = "busybox"

5 changes: 5 additions & 0 deletions pkg/skaffold/schema/defaults/defaults.go
@@ -56,6 +56,7 @@ func Set(c *latest.SkaffoldConfig) error {
setDefaultClusterTimeout,
setDefaultClusterPullSecret,
setDefaultClusterDockerConfigSecret,
setDefaultKanikoPodGracePeriod,
); err != nil {
return err
}
@@ -248,6 +249,10 @@ func setDefaultKanikoArtifactImage(artifact *latest.Artifact) {
artifact.KanikoArtifact.Image = valueOrDefault(kanikoArtifact.Image, constants.DefaultKanikoImage)
}

func setDefaultKanikoPodGracePeriod(cluster *latest.ClusterDetails) error {
cluster.PodGracePeriodSeconds = valueOrDefault(cluster.PodGracePeriodSeconds, constants.DefaultKanikoPodGracePeriodSeconds)
return nil
}
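The defaulting above relies on `valueOrDefault`, which applies the default only when the field was left empty, so a user-configured grace period always wins. A minimal self-contained sketch of that behavior:

```go
package main

import "fmt"

// valueOrDefault mirrors the helper in defaults.go: an explicit value wins,
// an empty string falls back to the default.
func valueOrDefault(v, def string) string {
	if v != "" {
		return v
	}
	return def
}

func main() {
	fmt.Println(valueOrDefault("", "2"))   // field left empty → default "2"
	fmt.Println(valueOrDefault("10", "2")) // user configured 10 seconds → "10"
}
```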
func valueOrDefault(v, def string) string {
if v != "" {
return v
3 changes: 3 additions & 0 deletions pkg/skaffold/schema/latest/config.go
@@ -275,6 +275,9 @@ type ClusterDetails struct {

// Resources define the resource requirements for the kaniko pod.
Resources *ResourceRequirements `yaml:"resources,omitempty"`

// PodGracePeriodSeconds is how long the kaniko pod waits after it finishes building and pushing the image.
PodGracePeriodSeconds string `yaml:"podGracePeriodSeconds,omitempty"`
}

// DockerConfig contains information about the docker `config.json` to mount.