Currently, if the host shares container storage with a container
running podman, the podman inside of the container resets the
storage on the host. This can cause issues on the host, and also
causes the podman command running the container to fail to unmount
/dev/shm.
podman run -ti --rm --privileged -v /var/lib/containers:/var/lib/containers quay.io/podman/stable podman run alpine echo hello
* unlinkat /var/lib/containers/storage/overlay-containers/a7f3c9deb0656f8de1d107e7ddff2d3c3c279c11c1635f233a0bffb16051fb2c/userdata/shm: device or resource busy
* unlinkat /var/lib/containers/storage/overlay-containers/a7f3c9deb0656f8de1d107e7ddff2d3c3c279c11c1635f233a0bffb16051fb2c/userdata/shm: device or resource busy
Since podman is volume mounting in the graphroot, it will add a flag to
/run/.containerenv to tell the podman inside of the container whether to
reset storage or not.
Since the inner podman is running inside of a container, there is no
reason to assume this is a fresh boot, so if the "container" environment
variable is set, skip the reset of storage.
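The intended check might look roughly like this sketch; the function
name is made up and the real podman code differs, but the idea is to
consult the "container" environment variable before resetting storage:

    package main

    import (
        "fmt"
        "os"
    )

    // Sketch only; shouldRefreshStorage is a hypothetical name.
    // When the "container" environment variable is set, podman is running
    // inside a container, so the (possibly host-shared) storage must not
    // be reset as if this were a fresh boot.
    func shouldRefreshStorage() bool {
        if os.Getenv("container") != "" {
            return false
        }
        return true
    }

    func main() {
        fmt.Println("refresh storage:", shouldRefreshStorage())
    }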
Also added tests to make sure /run/.containerenv is working correctly.
Fixes: https://github.com/containers/podman/issues/9191
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Make sure to not set an empty $HOME for containers and let it default to
"/".
https://github.com/containers/crun/pull/599 is required to fully
address #9378.
Partially-Fixes: #9378
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The `images/create` endpoint should always attempt to pull a newer
image. Previously, the local image was used, which is not compatible
with Docker and caused issues in the GitLab CI.
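Conceptually the handler should behave like this sketch (names are
invented, not the real compat handler): the pull is always attempted
and a local copy no longer short-circuits it.

    package main

    import "fmt"

    // placeholder for an actual registry pull
    func pullNewer(name string) error { return nil }

    // Sketch only. Docker semantics for images/create require attempting
    // the pull even when an image with the same name exists locally, so
    // there is no early return based on haveLocal.
    func createImage(name string, haveLocal bool) error {
        return pullNewer(name)
    }

    func main() {
        fmt.Println(createImage("alpine", true))
    }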
Fixes: #9232
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
When creating a pod with --infra-image and using an untagged image for
the infra-image (none/none), the lookup of the image's name caused a
panic.
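A sketch of the kind of guard that avoids the panic (helper name and
fallback are illustrative):

    package main

    import "fmt"

    // Sketch only. An untagged image has no names, so indexing Names()[0]
    // without a length check panics; guard and fall back to the image ID.
    func infraImageName(names []string, id string) string {
        if len(names) > 0 {
            return names[0]
        }
        return id
    }

    func main() {
        fmt.Println(infraImageName(nil, "d6e46aa2470d"))
    }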
Fixes: #9374
Signed-off-by: baude <bbaude@redhat.com>
Make sure that Podman's default OCI runtime is passed to Buildah in
`podman build`. In theory, Podman and Buildah should use the same
defaults, but the projects move at different speeds, and it turns out
we caused a regression in v3.0.
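A rough sketch of the idea, using an illustrative options struct rather
than Buildah's real one:

    package main

    import "fmt"

    // Sketch only; BuildOptions stands in for Buildah's build options.
    type BuildOptions struct {
        Runtime string
    }

    // Propagate Podman's configured OCI runtime instead of relying on
    // Buildah's own default, which may differ between the two projects.
    func buildOptionsFor(podmanRuntime string) BuildOptions {
        return BuildOptions{Runtime: podmanRuntime}
    }

    func main() {
        fmt.Println(buildOptionsFor("crun"))
    }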
Fixes: #9365
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The error message when failing to create an image engine
unconditionally pointed to the Podman socket, which is quite confusing
when running locally.
Move the error message to the point where the first ping to the service
fails.
[NO TESTS NEEDED]
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
When we stop a container we print the full ID; this matches neither
the Docker behavior nor the start behavior. We should print the
user's rawInput when we successfully stop the container.
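A small sketch of the intended behavior, with invented names; the point
is to map the resolved ID back to whatever the user typed:

    package main

    import "fmt"

    // Sketch only. rawInput maps resolved container IDs to the strings
    // the user passed on the command line (name, short ID, full ID).
    func reportStopped(rawInput map[string]string, id string) {
        if in, ok := rawInput[id]; ok {
            fmt.Println(in)
            return
        }
        fmt.Println(id)
    }

    func main() {
        reportStopped(map[string]string{"1f2e3d4c5b6a": "mynginx"}, "1f2e3d4c5b6a")
    }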
Fixes: https://github.com/containers/podman/issues/9386
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
Currently podman always chowns the WORKDIR to root:root.
This PR returns early if the WORKDIR already exists, leaving its
ownership untouched.
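A minimal sketch of the early return (helper name and permissions are
illustrative):

    package main

    import (
        "fmt"
        "os"
    )

    // Sketch only. Create and chown the workdir only when it does not
    // exist yet; if the image already ships the directory, leave its
    // ownership alone instead of forcing root:root.
    func ensureWorkdir(path string, uid, gid int) error {
        if _, err := os.Stat(path); err == nil {
            return nil // already exists, do not touch ownership
        }
        if err := os.MkdirAll(path, 0o755); err != nil {
            return err
        }
        return os.Chown(path, uid, gid)
    }

    func main() {
        fmt.Println(ensureWorkdir("/tmp/example-workdir", 1000, 1000))
    }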
Fixes: https://github.com/containers/podman/issues/9387
Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
The unit generation accidentally escaped the %t in the pod id file path.
This is a regression caused by #9178. This was not caught by the tests
because the test itself was wrong. It used a full path instead of the
systemd variable %t like the actual code does.
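To illustrate the regression, a small standalone sketch (the pod id
file name is made up):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // %t is the systemd specifier for the runtime directory
        // (e.g. /run/user/1000) and must stay literal in the unit file.
        podIDFile := "%t/pod-mypod.pod-id"

        // The regression escaped it to %%t, so systemd stopped expanding it.
        escaped := strings.ReplaceAll(podIDFile, "%", "%%")
        fmt.Println("correct:", podIDFile)
        fmt.Println("broken: ", escaped)
    }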
Fixes #9373
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
The logic in the e2e test for multiple network aliases indicates that
the test should wait for the containerized nginx to be ready. As this
may take some time, the test does an exponential backoff starting at
2050ms.
Fix the logic by removing the `Expect(...)` call during the exponential
backoff. Otherwise, the test errors immediately.
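The loop should look roughly like this sketch: back off and retry, and
only check the final result once the loop is done (details are
illustrative, not the actual test code):

    package main

    import (
        "fmt"
        "time"
    )

    // Sketch only. Poll until nginx answers, doubling the wait each round;
    // asserting inside the loop would abort the test on the very first
    // failed attempt instead of backing off.
    func waitForNginx(curl func() error) error {
        interval := 2050 * time.Millisecond
        var err error
        for i := 0; i < 6; i++ {
            if err = curl(); err == nil {
                return nil
            }
            time.Sleep(interval)
            interval *= 2
        }
        return fmt.Errorf("nginx not ready: %w", err)
    }

    func main() {
        // Only the returned error gets asserted by the test afterwards.
        fmt.Println(waitForNginx(func() error { return nil }))
    }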
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
The timestamps of some images must have changed, which changed the
number of expected filtered images. The test conditions seem fragile,
but for now it's more important to get CI back.
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
When the query decoding fails at the beginning of WaitContainerLibpod(),
Error() sets the header but doesn't return after that.
This causes the execution flow to reach the WriteResponse() at the end
of WaitContainerLibpod(), which attempts to set another header, thus
causing the following error:
http: superfluous response.WriteHeader call from github.com/containers/podman/pkg/api/handlers/utils.WriteResponse (handler.go:124)
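The fix is the usual handler pattern of returning right after writing
the error, as in this generic net/http sketch (it does not use podman's
helpers, and the query struct is made up):

    package main

    import (
        "encoding/json"
        "net/http"
    )

    // Sketch only. After reporting the decode error the handler must
    // return; otherwise it falls through and writes a second header,
    // which triggers the "superfluous WriteHeader" warning.
    func waitContainer(w http.ResponseWriter, r *http.Request) {
        var query struct {
            Condition string `json:"condition"`
        }
        if err := json.NewDecoder(r.Body).Decode(&query); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return // the missing return was the bug
        }
        w.WriteHeader(http.StatusOK)
        _ = json.NewEncoder(w).Encode(map[string]string{"condition": query.Condition})
    }

    func main() {
        http.HandleFunc("/wait", waitContainer)
        _ = http.ListenAndServe("127.0.0.1:8080", nil)
    }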
[NO TESTS NEEDED]
Signed-off-by: Nikolay Edigaryev <edigaryev@gmail.com>
Cleanup the golangci.yml file and enable more linters.
`pkg/spec` and `iopodman.io` are history. The vendor directory
is excluded by default. The dependencies dir was listed twice.
Fix the reported problems in `pkg/specgen` because it was also
matched by the `pkg/spec` exclusion.
Enable the structcheck, typecheck, varcheck, deadcode and depguard
linters.
[NO TESTS NEEDED]
Signed-off-by: Paul Holzinger <paul.holzinger@web.de>
Internally, Podman constructs a tree of layers in containers/storage to
quickly compute relations among layers and hence images. To compute the
tree, we intersect all local layers with all local images. So far,
lookup errors have been fatal, which has turned out to be a mistake
since it seems fairly easy to cause storage corruption, for instance
by killing builds [1]. In that case, a (partial) image may list a layer
which does not exist (anymore). Since the errors were fatal, there was
no easy way to clean up, and many commands were erroring out.
To improve usability, turn the fatal errors into warnings that guide
the user into resolving the issue. In this case, a `podman system
reset` may be the appropriate way for now.
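Conceptually the change is to log and skip instead of failing, roughly
like this sketch (all names are invented):

    package main

    import (
        "errors"
        "fmt"
    )

    var errLayerUnknown = errors.New("layer not known")

    // Sketch only. While building the layer tree, a missing layer no
    // longer aborts the whole operation; warn and continue so other
    // commands keep working on a (partially) corrupted store.
    func buildTree(imageLayers []string, lookup func(string) error) {
        for _, l := range imageLayers {
            if err := lookup(l); err != nil {
                fmt.Printf("WARN: layer %s not found, ignoring (a 'podman system reset' may help): %v\n", l, err)
                continue
            }
            // ... insert the layer into the tree ...
        }
    }

    func main() {
        buildTree([]string{"aaa", "bbb"}, func(id string) error {
            if id == "bbb" {
                return errLayerUnknown
            }
            return nil
        })
    }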
[NO TESTS NEEDED] because I have no reliable way to force it.
[1] https://github.com/containers/podman/issues/8148#issuecomment-778253474
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>