podman/libpod/lock/shm
Ian Wienand 72cf389685 shm_lock: Handle ENOSPC better in AllocateSemaphore
When starting a container, libpod/runtime_pod_linux.go:NewPod calls
libpod/lock/lock.go:AllocateLock, which ends up in here.  If you
exceed num_locks, a "podman run ..." will fail with:

 Error: error allocating lock for new container: no space left on device

As noted inline, this error is technically correct, since it refers
to the SHM area, but for anyone who has not dug into the source
(i.e. me, before a few hours ago :) the first assumption is going to
be that the disk is full.  I spent quite a bit of time trying to
diagnose which disk, partition, overlay, etc. was filling up before I
realised it was actually locks leaking from failing containers.
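
For illustration only (this is not podman code): the SHM allocator
signals pool exhaustion with ENOSPC, and Go's syscall.Errno renders
that errno as the generic "no space left on device" string, which is
why the failure reads like a full disk:

 package main

 import (
 	"fmt"
 	"syscall"
 )

 func main() {
 	// ENOSPC is what the allocator reports once every semaphore in
 	// the shared-memory pool is taken; on Linux it stringifies to
 	// the misleading message shown above.
 	fmt.Println(syscall.ENOSPC.Error())
 	// Output: no space left on device
 }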

This change overrides the error in this case with a more explicit
message that hopefully puts people on the right track to a fix
faster.  You will now see:

 $ ./bin/podman run --rm -it fedora bash
 Error: error allocating lock for new container: allocation failed; exceeded num_locks (20)
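
The fix follows roughly the pattern sketched below; the type and
member names (exampleLocks, allocateSemaphore, maxLocks) are
illustrative stand-ins rather than the exact identifiers in
shm_lock.go:

 package main

 import (
 	"fmt"
 	"syscall"
 )

 // exampleLocks stands in for the SHM lock manager.
 type exampleLocks struct {
 	maxLocks uint32
 	used     uint32
 }

 // allocateSemaphore mimics the C allocator: it returns the new index,
 // or a negated errno (-ENOSPC) once the pool is exhausted.
 func (l *exampleLocks) allocateSemaphore() int64 {
 	if l.used >= l.maxLocks {
 		return -int64(syscall.ENOSPC)
 	}
 	idx := l.used
 	l.used++
 	return int64(idx)
 }

 // AllocateSemaphore maps ENOSPC from the pool to a message that names
 // the num_locks limit instead of "no space left on device".
 func (l *exampleLocks) AllocateSemaphore() (uint32, error) {
 	ret := l.allocateSemaphore()
 	if ret < 0 {
 		err := syscall.Errno(-ret)
 		if err == syscall.ENOSPC {
 			return 0, fmt.Errorf("allocation failed; exceeded num_locks (%d): %w", l.maxLocks, err)
 		}
 		return 0, err
 	}
 	return uint32(ret), nil
 }

 func main() {
 	locks := &exampleLocks{maxLocks: 2}
 	for i := 0; i < 3; i++ {
 		if _, err := locks.AllocateSemaphore(); err != nil {
 			fmt.Println("Error:", err)
 		}
 	}
 }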

[NO NEW TESTS NEEDED] (just changes an existing error message)

Signed-off-by: Ian Wienand <iwienand@redhat.com>
2021-11-09 18:34:21 +11:00
shm_lock.c codespell: spelling corrections 2019-11-13 08:15:00 +11:00
shm_lock.go shm_lock: Handle ENOSPC better in AllocateSemaphore 2021-11-09 18:34:21 +11:00
shm_lock.h Build cgo files with -Wall -Werror 2019-06-21 10:14:19 +02:00
shm_lock_nocgo.go standardize logrus messages to upper case 2021-09-22 15:29:34 -04:00
shm_lock_test.go Delete prior /dev/shm/* 2020-08-28 09:26:33 -04:00