podman/libpod/lock
Ian Wienand 72cf389685 shm_lock: Handle ENOSPC better in AllocateSemaphore
When starting a container, libpod/runtime_pod_linux.go:NewPod calls
libpod/lock/lock.go:AllocateLock, which ends up in here.  If you
exceed num_locks, in response to a "podman run ..." you will see:

 Error: error allocating lock for new container: no space left on device

As noted inline, this error is technically correct, since it refers to
the SHM area that holds the lock table, but for anyone who has not dug
into the source (i.e. me, before a few hours ago :) the initial
thought is going to be that a disk is full.  I spent quite a bit of
time trying to diagnose which disk, partition, overlay, etc. was
filling up before I realised it was actually locks leaked by failing
containers.

This change overrides that case with a more explicit message that
hopefully puts people on the right track to fixing this faster (the
handling is sketched below).  You will now see:

 $ ./bin/podman run --rm -it fedora bash
 Error: error allocating lock for new container: allocation failed; exceeded num_locks (20)
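
For context, the handling is roughly the following in Go; the type,
field, and helper names below are illustrative assumptions, not the
exact Podman source:

 package shm

 import (
 	"syscall"

 	"github.com/pkg/errors"
 )

 // SHMLocks is a stand-in for the lock table wrapper; maxLocks holds
 // the configured num_locks value.
 type SHMLocks struct {
 	maxLocks uint32
 }

 // allocateSemaphore stands in for the cgo call into the SHM lock
 // table, which returns a free semaphore index, or a negative errno
 // (-ENOSPC when all num_locks slots are taken).
 func allocateSemaphore() int64 {
 	return -int64(syscall.ENOSPC)
 }

 // AllocateSemaphore finds a free semaphore in the SHM area and
 // returns its index.
 func (locks *SHMLocks) AllocateSemaphore() (uint32, error) {
 	retCode := allocateSemaphore()
 	if retCode < 0 {
 		errnoVal := syscall.Errno(-retCode)
 		// ENOSPC here means the lock table is full, not that a
 		// filesystem is out of space, so say that explicitly
 		// instead of passing the raw errno up.
 		if errnoVal == syscall.ENOSPC {
 			return 0, errors.Errorf(
 				"allocation failed; exceeded num_locks (%d)", locks.maxLocks)
 		}
 		return 0, errnoVal
 	}
 	return uint32(retCode), nil
 }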

[NO NEW TESTS NEEDED] (just changes an existing error message)

Signed-off-by: Ian Wienand <iwienand@redhat.com>
2021-11-09 18:34:21 +11:00
Name                             Last commit                                            Date
file/                            standardize logrus messages to upper case              2021-09-22 15:29:34 -04:00
shm/                             shm_lock: Handle ENOSPC better in AllocateSemaphore    2021-11-09 18:34:21 +11:00
file_lock_manager.go             bump go module to v3                                   2021-02-22 09:03:51 +01:00
in_memory_locks.go               When refreshing after a reboot, force lock allocation  2019-05-06 14:17:54 -04:00
lock.go                          When refreshing after a reboot, force lock allocation  2019-05-06 14:17:54 -04:00
shm_lock_manager_linux.go        bump go module to v3                                   2021-02-22 09:03:51 +01:00
shm_lock_manager_unsupported.go  Add initial version of renumber backend                2019-02-21 10:51:42 -05:00