
Pod stuck in Terminating if a configmap is mounted with subPath #771


Closed
cezarsa opened this issue Aug 14, 2019 · 5 comments · Fixed by #773
Labels
kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments


cezarsa commented Aug 14, 2019

What happened:

When I create a pod that includes a configmap volume and the volume mount uses a subPath, it's not possible to delete the Pod; it stays in Terminating forever.
This is a weird one and it only seems to happen on Mac; colleagues on Linux were unable to reproduce it, so it's possibly a Docker issue.

What you expected to happen:

I expected the Pod to be deleted after a delete call.

How to reproduce it (as minimally and precisely as possible):

With the following YAML saved as example.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: test-terminating
spec:
  containers:
  - command: ["sleep", "10000000000"]
    image: busybox
    name: test
    volumeMounts:
    - mountPath: /xyz
      name: cmap
      subPath: abc
  volumes:
  - configMap:
      name: test-terminating
    name: cmap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-terminating
data:
  abc: "xyz"

Run:

$ kind create cluster
$ export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
$ kubectl apply -f example.yaml
$ kubectl wait --for=condition=Ready pod test-terminating
$ kubectl delete pod test-terminating # This will hang forever!!!

After a while it's possible to see that the pod is still Terminating:

$ kubectl get pod
NAME               READY   STATUS        RESTARTS   AGE
test-terminating   0/1     Terminating   0          28m
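A quick way to confirm what kind of "stuck" this is: if the pod's deletionTimestamp is set, the API server accepted the delete and it's the kubelet that is failing to finish teardown. A minimal sketch, assuming kubectl is pointed at the kind cluster (guarded so it degrades gracefully if it isn't):

```shell
#!/bin/sh
# Sketch: a non-empty deletionTimestamp means the delete was accepted
# and the pod is stuck in kubelet-side cleanup, not in the API server.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pod test-terminating \
    -o jsonpath='{.metadata.deletionTimestamp}{"\n"}' || true
else
  echo "kubectl not available; run this against the kind cluster"
fi
```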

Anything else we need to know?:

The problem only seems to happen on macOS; on Linux I was unable to reproduce it and the pod is successfully deleted.

Environment:

  • kind version: (use kind version):
$ kind version
v0.4.0
  • Kubernetes version: (use kubectl version):
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T16:54:35Z", GoVersion:"go1.12.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-25T23:41:27Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
  • Docker version: (use docker info):
$ docker info
Client:
 Debug Mode: false

Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 1
 Server Version: 19.03.1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.9.184-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 1.952GiB
 Name: docker-desktop
 ID: F4T7:DRSI:2M7J:5NHO:M4NR:GQUU:OKLV:KQQI:7CUQ:6XUE:ONRB:4EFF
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 34
  Goroutines: 49
  System Time: 2019-08-14T19:28:34.768985516Z
  EventsListeners: 2
 HTTP Proxy: gateway.docker.internal:3128
 HTTPS Proxy: gateway.docker.internal:3129
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine
  • OS (e.g. from /etc/os-release):
$ sw_vers
ProductName:	Mac OS X
ProductVersion:	10.14.4
BuildVersion:	18E226
@cezarsa cezarsa added the kind/bug Categorizes issue or PR as related to a bug. label Aug 14, 2019
@BenTheElder BenTheElder self-assigned this Aug 14, 2019
@BenTheElder BenTheElder added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Aug 14, 2019
@BenTheElder
Member

I can replicate this and see the error.

Aug 14 22:54:21 kind-control-plane kubelet[215]: E0814 22:54:21.251200 215 nestedpendingoperations.go:270] Operation for ""kubernetes.io/configmap/69d410fd-a144-435f-a1cb-7b7d9ce109bc-cmap" ("69d410fd-a144-435f-a1cb-7b7d9ce109bc")" failed. No retries permitted until 2019-08-14 22:56:23.251137 +0000 UTC m=+391.229059401 (durationBeforeRetry 2m2s). Error: "error cleaning subPath mounts for volume "cmap" (UniqueName: "kubernetes.io/configmap/69d410fd-a144-435f-a1cb-7b7d9ce109bc-cmap") pod "69d410fd-a144-435f-a1cb-7b7d9ce109bc" (UID: "69d410fd-a144-435f-a1cb-7b7d9ce109bc") : error processing /var/lib/kubelet/pods/69d410fd-a144-435f-a1cb-7b7d9ce109bc/volume-subpaths/cmap/test: error cleaning subpath mount /var/lib/kubelet/pods/69d410fd-a144-435f-a1cb-7b7d9ce109bc/volume-subpaths/cmap/test/0: Unmount failed: exit status 32\nUnmounting arguments: /var/lib/kubelet/pods/69d410fd-a144-435f-a1cb-7b7d9ce109bc/volume-subpaths/cmap/test/0\nOutput: umount: /var/lib/kubelet/pods/69d410fd-a144-435f-a1cb-7b7d9ce109bc/volume-subpaths/cmap/test/0: not mounted.\n\n"
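The telling part of that log is the pairing of `Unmount failed: exit status 32` with `not mounted`: per umount(8), 32 is the generic "mount failure" exit status, and the kubelet is retrying (note the durationBeforeRetry 2m2s backoff) an unmount of a path that is no longer a mount point at all, so the retry can never succeed. A minimal sketch of that failure mode on any Linux host (the path is a hypothetical stand-in; the exact exit status can vary with privileges and umount implementation):

```shell
#!/bin/sh
# Reproduce the failure mode from the kubelet log: umount on a directory
# that is not a mount point fails with a non-zero exit status
# (32, "mount failure", with util-linux umount run as root).
mkdir -p /tmp/not-a-mountpoint
umount /tmp/not-a-mountpoint
echo "umount exit status: $?"
```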

Contributor

aojea commented Aug 15, 2019

Are there more logs associated? journalctl | grep cmap

What happens if you try to unmount it manually?

mount | grep cmap (obtain mount point)
umount -v /var/lib/kubelet/pods/UUID/volume-subpaths/cmap/test/0
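One wrinkle when trying these commands against kind: the "node" is a Docker container, so mount and umount have to run inside it, not on the macOS host. A sketch assuming kind's default single-node container name, kind-control-plane (guarded so it degrades gracefully where Docker is unavailable):

```shell
#!/bin/sh
# Look for lingering cmap subPath mounts from inside the kind node
# container; "kind-control-plane" is kind's default node name.
if command -v docker >/dev/null 2>&1; then
  docker exec kind-control-plane sh -c 'mount | grep cmap' || true
else
  echo "docker not available; run this on the machine hosting the kind node"
fi
```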

@BenTheElder
Member

I'm pretty sure I know what this is, just ran out of time to patch it yesterday and then today we've had some upstream things to patch :-)

There are more logs yes, but that's the relevant part. If you run umount manually you won't get a different result. This is a bug in kind.

@BenTheElder
Member

This should be fixed (and probably other things!) with the latest version, please let me know if you see this again. We'll be cutting a new release soon but need to check on some unrelated issues 😅

@BenTheElder
Member

Also: Thanks for reporting this, and for the detailed report and minimal reproducer!
