
Skipping Deploy due to error: apply: kubectl apply #1077


Closed
cliffburdick opened this issue Oct 3, 2018 · 4 comments

cliffburdick (Contributor) commented Oct 3, 2018

Expected behavior

Previous job is deleted

Actual behavior

Error on kubectl apply

Information

Hi, I'm trying to understand whether this workflow is even possible, because I'm getting an error when deploying. I'm using skaffold dev to deploy a 'Job' kind from the manifest, which runs to completion with the container showing 'Completed' in kubectl get pods. Once it completes, the container exits and skaffold dev sits there. This is all expected. From another window I trigger an update to the source, which skaffold dev notices, and it attempts a redeploy. This fails with the following error:

Build complete in 13.958695629s
Starting deploy...
kubectl client version: 1.11
The Job "test-container" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"controller-uid":"eef54fec-c6b2-11e8-a478-246e96111b94", "job-name":"triad"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:core.PodSpec{Volumes:[]core.Volume(nil), InitContainers:[]core.Container(nil), Containers:[]core.Container{core.Container{Name:"test-container", Image:"myrepo/app/uncommitted:c907e53-dirty-7fc2439", Command:[]string{"/src/app"}, Args:[]string{"test"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar(nil), Resources:core.ResourceRequirements{Limits:core.ResourceList{"nvidia.com/gpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1", Format:"DecimalSI"}}, Requests:core.ResourceList(nil)}, VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:true}}, RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc428003960), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc425cd0460), 
ImagePullSecrets:[]core.LocalObjectReference{core.LocalObjectReference{Name:"regcred"}}, Hostname:"", Subdomain:"", Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil)}}: field is immutable
WARN[0132] Skipping Deploy due to error: apply: kubectl apply: exit status 1
Watching for changes...
Starting build...
  • Skaffold version: v0.15.1
  • Operating system: Ubuntu 18.04
  • Contents of skaffold.yaml:
apiVersion: skaffold/v1alpha3
build:
  artifacts:
  - docker:
      buildArgs: {}
      dockerfilePath: Dockerfile
    imageName: myrepo/app/uncommitted
    workspace: .
  tagPolicy:
    gitCommit: {}
deploy:
  kubectl:
    manifests:
    - test_manifest*
kind: Config

Steps to reproduce the behavior

  1. Create kubernetes Job with restartPolicy Never
  2. Run skaffold dev
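For context, a minimal manifest matching these repro steps might look as follows. This is a sketch, not the reporter's actual file: the Job/container name, image, command, and args are taken from the error log above, and everything else (labels, resource limits from the log are omitted) is an assumption.

```yaml
# Illustrative test_manifest.yaml; names and image come from the log
# above, the rest is assumed. A Job's spec.template is immutable once
# created, which is what the kubectl apply error complains about.
apiVersion: batch/v1
kind: Job
metadata:
  name: test-container
spec:
  template:
    spec:
      containers:
      - name: test-container
        image: myrepo/app/uncommitted
        command: ["/src/app"]
        args: ["test"]
      restartPolicy: Never   # step 1: Job with restartPolicy Never
```

When skaffold dev rebuilds, it re-tags the image, so the re-applied manifest differs from the live Job in spec.template, triggering the "field is immutable" error.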
r2d4 (Contributor) commented Oct 3, 2018

This will be fixed by #940. You'll need kubectl v1.12.0 or greater for it to work though!
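Until that fix is released, a manual workaround is possible because the root cause is that a Job's pod template is immutable: the stale Job has to be removed before the updated manifest can be applied. A sketch, assuming the resource and file names from this issue:

```shell
# Sketch of a manual workaround; "test-container" and
# "test_manifest.yaml" are the names used in this issue.
# Delete the completed Job so the re-apply creates a fresh one
# instead of trying to mutate the immutable spec.template:
kubectl delete job test-container --ignore-not-found
kubectl apply -f test_manifest.yaml

# With kubectl v1.12+, apply --force achieves the same effect in one
# step by deleting and recreating objects that cannot be updated:
kubectl apply --force -f test_manifest.yaml
```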

cliffburdick (Author) commented

@r2d4 thanks!

cliffburdick (Author) commented

@r2d4 do users need to supply the --force flag manually in skaffold.yaml before this works? I upgraded kubectl to 1.12, but the issue still occurs.

cliffburdick reopened this Oct 3, 2018
priyawadhwa (Contributor) commented

@cliffburdick once that PR is merged, skaffold will automatically apply the --force flag. You'll just need to update skaffold upon the next release so that this change is included.
