Podman

podman is:

… a daemon-less container engine for developing, managing, and running OCI Containers on your Linux System

For the most part podman behaves very similarly to docker, with many of the commands being identical, e.g.:

  • podman images - lists all images in the host repository

  • podman ps - lists all running containers

  • podman run - run an image in a container (all flags work as well, e.g. --rm -it --net=host -v, etc.)

  • etc.

 

Enabling the Podman service

Podman comes with an optional service/API that can be enabled:

user@ubuntu-20-04:~$ systemctl --user enable podman.socket
user@ubuntu-20-04:~$ systemctl --user start podman.socket
user@ubuntu-20-04:~$ systemctl --user status podman.socket
● podman.socket - Podman API Socket
     Loaded: loaded (/usr/lib/systemd/user/podman.socket; enabled; vendor preset: enabled)
     Active: inactive (dead) since Thu 2022-01-06 15:59:44 PST; 16min ago
   Triggers: ● podman.service
       Docs: man:podman-system-service(1)
     Listen: /run/user/1000/podman/podman.sock (Stream)

Jan 06 14:32:11 dustin-ThinkPad-T420 systemd[1042]: Listening on Podman API Socket.
Jan 06 15:59:44 dustin-ThinkPad-T420 systemd[1042]: podman.socket: Succeeded.
Jan 06 15:59:44 dustin-ThinkPad-T420 systemd[1042]: Closed Podman API Socket.

The service makes your Podman instance accessible remotely by adding the --remote flag to your Podman command, e.g. podman --remote run <image> ...

It's unclear whether this behaves like the Docker daemon, since podman touts itself as a daemon-less tool

Mounting /run/user/1000/podman/podman.sock or /run/podman/podman.sock into a container doesn't seem to do anything, but further research may be needed

Podman Registry

We run a local Docker registry at http://localhost:5000 for internal usage

Podman stores the list of registries in registries.conf:

  • /etc/containers/registries.conf for system wide config

  • $HOME/.config/containers/registries.conf for a single user

Podman enables these registries by default to allow for short-name usage when pulling images:

unqualified-search-registries = ['registry.fedoraproject.org', 'registry.access.redhat.com', 'registry.centos.org', 'docker.io']

To enable the local Podman image registry (after creating it):

[[registry]]
location = "localhost:5000"
insecure = true

Running podman in a container

Docker has a feature where you can start a sibling container from within a container simply by mounting the Docker socket beforehand: -v /var/run/docker.sock:/var/run/docker.sock

  • It’s useful b/c chimera runs its workflows (or jobs) from within a container itself

Podman doesn’t quite have that feature, but it can be achieved through a workaround:

  • podman isn’t run as a daemon, so mounting a docker.sock file wouldn’t work

  • The user has to be root (more research is needed to see if it can be done without)

 

First activate the root user:

Pull down some images:

Images are now in the repository:
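The original command captures were lost; the three steps above can be sketched roughly as follows (the image names are placeholders, not necessarily the ones used originally):

```shell
# Become root: rootful podman stores images under /var/lib/containers
sudo su -

# Pull down some images (alpine/busybox used here as placeholders)
podman pull docker.io/library/alpine
podman pull docker.io/library/busybox

# Verify the images are now in the root user's repository
podman images
```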

According to Oracle’s documentation:

 … images are stored in the /var/lib/containers directory when Podman is run by the root user. For standard users, images are typically stored in $HOME/.local/share/containers/storage/.

So mounting /var/lib/containers into the container gives it access to the host’s (or parent’s) images

  • for some reason we need to mount /run/netns as well

  • make sure to add the --privileged flag to let your container run podman commands (docs)

If you’re receiving an error similar to this:

ERRO[0000]... error acquiring lock N for volume <image>: file exists

It can be resolved by running podman system renumber (docs)

And the container can now spawn new podman containers:
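A minimal sketch of the rootful setup described above (image names are placeholders; the exact commands were in the original screenshots):

```shell
# Outer container: mount the rootful image store and the network
# namespaces, and run privileged so podman can work inside
sudo podman run --rm -it --privileged \
  -v /var/lib/containers:/var/lib/containers \
  -v /run/netns:/run/netns \
  docker.io/library/fedora bash

# Inside the outer container: host images are visible ...
podman images
# ... and a sibling container can be spawned
podman run --rm docker.io/library/alpine echo "hello from a nested container"
```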

 

Running podman in a container (rootless)

Method 1:

If you mount -v $HOME/.local/share/containers/storage:/var/lib/shared into the container, it will have access to the host images

Was unable to run a sibling container using this method; please look at Method 2

  • more research needed to see if it behaves the same as a rootful user
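A sketch of the Method 1 mount (rootless; the image name is a placeholder):

```shell
# Expose the rootless user's image store to the container as a shared store
podman run --rm -it --privileged \
  -v $HOME/.local/share/containers/storage:/var/lib/shared \
  docker.io/library/fedora podman images
```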

Method 2:

According to this GitHub issue, mount the storage directory twice:

inside the container, all the host images are visible and a second container can be run from inside

in a new tab, the 2 sibling containers (from the host) were visible running simultaneously:
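The exact commands were in the original screenshots; a hedged reconstruction of the "mount it twice" idea (mounting the host's rootless storage at both the rootful and a hypothetical rootless in-container path) might look like:

```shell
# Both target paths inside the container are assumptions based on the
# GitHub issue's description, not confirmed against the original capture
podman run --rm -it --privileged \
  -v $HOME/.local/share/containers/storage:/var/lib/containers/storage \
  -v $HOME/.local/share/containers/storage:/home/podman/.local/share/containers/storage \
  docker.io/library/fedora podman images
```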

Mounting work directories through multiple layers of podman containers

created a directory /tmp/test containing .txt files; when mounting the directory through multiple layers of podman containers, the volume can still be accessed
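For example, assuming /tmp/test exists on the host, the same -v flag can simply be repeated at each layer (image names are placeholders):

```shell
# Layer 1: mount the host directory into the first container
sudo podman run --rm -it --privileged \
  -v /tmp/test:/tmp/test \
  -v /var/lib/containers:/var/lib/containers \
  -v /run/netns:/run/netns \
  docker.io/library/fedora bash

# Layer 2 (run inside layer 1): mount it again into a nested container
podman run --rm -v /tmp/test:/tmp/test docker.io/library/alpine ls /tmp/test
```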

According to podman documentation:

… people intend to use rootless Podman - they want their UID inside and outside the container to match. Thus, we provide the --userns=keep-id flag, which ensures that your user is mapped to its own UID and GID inside the container.

It is also helpful to distinguish between running Podman as a rootless user, and a container which is built to run rootless. If the container you're trying to run has a USER which is not root, then when mounting volumes you must use --userns=keep-id. This is because the container user would not be able to become root and access the mounted volumes.
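A minimal sketch of the --userns=keep-id flag with a mounted volume (paths and image are placeholders):

```shell
# Keep the calling user's UID/GID inside the container so files written
# to the mounted volume keep the host user's ownership
podman run --rm --userns=keep-id -v /tmp/test:/tmp/test \
  docker.io/library/alpine touch /tmp/test/created-inside.txt
```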

Changes to HySDS

With HySDS needing to support both docker and podman (and also singularity), a large refactor will be required in job_worker.py (source code)

 

Issues:

  • Unable to load the .tar.gz image into podman

    • podman load < /data/work/cache/docker_image.tar.gz works, but the < redirection character doesn’t work with subprocess

  • Unable to map user 1011 into the pge-base image (or any image based off of pge-base)

    • same with using the --privileged flag; both do nothing

  • By default, if podman runs a container without --user, it will run as root

    • was able to run an arbitrary Docker image with /data/work/jobs mounted and was able to edit the directory

    • https://www.redhat.com/sysadmin/debug-rootless-podman-mounted-volumes

      • the third solution listed (adding the --userns=keep-id flag) seemed to work well without compromising the host directories

      • the second solution changes the ownership of directories on the host, making the celery worker unable to write/create directories
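One way around the < redirection issue above is to open the tarball in Python and pass the file handle as the child process's stdin, which is equivalent to the shell redirection but works with subprocess directly (a sketch; the podman usage in the comment is hypothetical):

```python
import subprocess

def run_with_stdin(cmd, path):
    """Run `cmd` with the contents of `path` fed to stdin.

    Equivalent to the shell's `cmd < path`, but without invoking a shell,
    so it can be called safely from subprocess-based job runners.
    """
    with open(path, "rb") as f:
        return subprocess.run(cmd, stdin=f, capture_output=True)

# Hypothetical usage for the issue above (requires podman installed):
# run_with_stdin(["podman", "load"], "/data/work/cache/docker_image.tar.gz")
```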

Note: JPL employees can also get answers to HySDS questions at Stack Overflow Enterprise: