User Guide

Tip

If any terminology in this guide is unfamiliar to you, see the glossary.

Available Container Images

The following info-boxes give information for each available image; this first box explains what each field means:

Image Name

Description Describes the purpose of this image
Included Software Lists the included software; Linux package manager packages are not listed here
URI This is the URI for the latest tag of the image; for example, use it as
apptainer pull [URI]
Links
  • Link to the Registry, which lists all other tags for this image; note that you must prefix any URI with docker://
  • Link to the Dockerfile that was used to build this image

base

Description base image with software commonly used by all images
Included Software
ccdb
hipo
qadb
rcdb
URI
docker://codecr.jlab.org/hallb/clas12/container-forge/base:latest
Links

base_root

Description same as base image, but with ROOT
Included Software
ccdb
hipo
qadb
rcdb
root
URI
docker://codecr.jlab.org/hallb/clas12/container-forge/base_root:latest
Links

recon

Description software for reconstruction
Included Software
asprof
ccdb
clara
clas12-config
coatjava
denoise
hipo
qadb
rcdb
URI
docker://codecr.jlab.org/hallb/clas12/container-forge/recon:latest
Links

analysis

Description software for analysis
Included Software
ccdb
clas12-mcgen
clas12root
hipo
iguana
qadb
rcdb
root
URI
docker://codecr.jlab.org/hallb/clas12/container-forge/analysis:latest
Links

Image Tags

We use the following convention for image tag names:

  • the latest tag is the latest version of the image, and may sometimes be unstable
  • tags which start with v are versioned images, which are intended to be stable, but their software is older than latest; see the example after this list
  • tags which start with MR are temporary images, built by developers of this repository; these images are auto-deleted, so do not use them unless you have a reason to
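
For example, to pull a specific versioned tag instead of latest (the v1.0.0 tag here is only a placeholder; check the image's Registry link for tags that actually exist):

apptainer pull docker://codecr.jlab.org/hallb/clas12/container-forge/base:v1.0.0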

Container Information

Software that we build and install for CLAS12 may be found in a few locations inside a running container:

File Path    Description
/usr/local   Common installation location for software that produces directories such as bin, include, lib
/opt         All other software packages are installed in /opt/<package>

The environment is configured to point to these installations; run env in a container to see.
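
For example, to inspect that environment from the host without opening an interactive shell (a sketch, assuming you have already pulled the analysis image into analysis_latest.sif, as described below):

apptainer exec analysis_latest.sif env | sort | less   # page through the container's environment variables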

Use container_info to list the installed software packages and their versions:

container_info          # list all packages
container_info --name   # get the name of the image for this container
container_info --help   # more usage guidance
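
You may also query an image from the host without entering it; for example (a sketch, assuming a pulled image file named analysis_latest.sif):

apptainer exec analysis_latest.sif container_info --name   # print the image name without entering the container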

How to Use Containers

Apptainer is a program that can download images and run containers. See the sections below for an introduction, and refer to Apptainer documentation for more.

Alternatively, you may use Docker or Podman, but we have not yet written documentation here for these.
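
For reference, with Docker the same registry URIs work without the docker:// prefix; a minimal, untested sketch (whether you land in a shell depends on the image's entrypoint, and you may need to bind-mount host paths yourself):

docker run -it --rm codecr.jlab.org/hallb/clas12/container-forge/analysis:latest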

Apptainer Setup for ifarm

If you are on ifarm, you should change some of Apptainer's default directories so that they do not consume disk space in limited locations such as your home directory; you may do so with these environment variables:

Environment Variable   Description                      Default
APPTAINER_CACHEDIR     image cache                      $HOME/.apptainer/cache
APPTAINER_TMPDIR       temporary folder for Apptainer   /tmp

Unfortunately, we do not yet have a good choice for what these variables should be set to on ifarm, but SciComp is working on it. For now, consider using either /volatile:

export APPTAINER_CACHEDIR=/volatile/clas12/users/$LOGNAME/apptainer/cache
export APPTAINER_TMPDIR=/volatile/clas12/users/$LOGNAME/apptainer/tmp

or /scratch (which differs for each computing node, e.g., ifarm2401 or ifarm2402):

export APPTAINER_CACHEDIR=/scratch/$LOGNAME/apptainer/cache
export APPTAINER_TMPDIR=/scratch/$LOGNAME/apptainer/tmp

You should also make sure these directories exist:

mkdir -p $APPTAINER_CACHEDIR $APPTAINER_TMPDIR
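
To make these settings persist across logins, you may add them to your shell startup file; a minimal sketch for bash, reusing the /volatile example above:

# in ~/.bashrc on ifarm, for example
export APPTAINER_CACHEDIR=/volatile/clas12/users/$LOGNAME/apptainer/cache
export APPTAINER_TMPDIR=/volatile/clas12/users/$LOGNAME/apptainer/tmp
mkdir -p $APPTAINER_CACHEDIR $APPTAINER_TMPDIR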

Warning

Both /volatile and /scratch automatically remove files after some time, which can cause confusing issues; if you need to pull an image again, you may first need to wipe these directories. This is why they are not ideal for this use case.

Good News: Once you have pulled an image and created a .sif file, you no longer need either of these directories, since the entire image is contained in that single file, which you may store anywhere, e.g., on /work.

Note

You may also need to customize these directories on your personal computer, depending on how your disk is partitioned (e.g., /tmp may be too small).

Pulling an Image

To download an image, use apptainer pull, which will download ("pull") an image and store it in a large (~few GB) .sif file, from which you may then run a container.

For example, let's pull the analysis image, using its latest URI given in the table above:

apptainer pull docker://codecr.jlab.org/hallb/clas12/container-forge/analysis:latest

This will create the file analysis_latest.sif in your current directory, and you may move it wherever you want. All of the software in this image is included within this file.
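
apptainer pull also accepts an output file as its first argument, in case you want to choose the .sif file's name and location up front; for example (the destination path is just illustrative):

apptainer pull /work/my_images/analysis.sif docker://codecr.jlab.org/hallb/clas12/container-forge/analysis:latest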

Running a Container

Running a container may be done with apptainer run:

apptainer run [IMAGE]
where [IMAGE] is the .sif file you have downloaded; alternatively, you can use a URI, which will pull the image and run a container.

Doing this will open a shell in the container for interactive use.

If you just want to use the container to run a single program, pass additional arguments; for example, to run qadb-info (which is available in any container image that has the QADB installed), run:

apptainer run [IMAGE] qadb-info --help

In other words, with apptainer run, you may either:

  1. start an interactive shell inside the container
  2. run container commands from your host computer's shell

Note

If you are submitting ifarm jobs, you'll need to use (2), since the computing nodes are the host computers. Alternatively, use apptainer exec.
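
For example, a batch job script may call apptainer exec non-interactively; a minimal sketch for a Slurm job (the image path, job name, and command are placeholders):

#!/bin/bash
#SBATCH --job-name=clas12_container_example
#SBATCH --output=%x_%j.out
# run one command inside the container (usage (2) above), then exit
apptainer exec /work/my_images/analysis_latest.sif qadb-info --help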

Tip

There are two other useful commands to run containers:

apptainer shell [IMAGE]              # run an interactive shell in a container
apptainer exec [IMAGE] [COMMAND]...  # run a command within a container

For further documentation on any apptainer command, use the --help option, or man:

man apptainer-run
man apptainer-shell
man apptainer-exec

Here are some useful options for apptainer run (and for apptainer shell and apptainer exec):

Option             Effect
--bind             make a host filesystem path, such as /cache, accessible from within the container
--cleanenv         clean the environment before running the container
--fakeroot         be the root user inside the container
--writable-tmpfs   make the container's filesystem (from /) writable, but do not persist the changes

Example 1

Run a container interactively, and have access to files on /cache:

apptainer run --bind /cache [IMAGE]
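
--bind also accepts multiple comma-separated paths, as well as a src:dest form to mount a host path at a different location inside the container; for example (paths are illustrative):

apptainer run --bind /cache,/volatile [IMAGE]
apptainer run --bind /path/on/host:/path/in/container [IMAGE]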

Example 2

If you want to test installing additional packages using pacman (the container's Linux distribution's package manager):

### start a container
apptainer run --cleanenv --fakeroot --writable-tmpfs [IMAGE]
### inside container shell
pacman -Syu         # update packages
pacman -S crystal   # install support for the Crystal programming language

However, once you leave the container, your changes will be discarded.

Tip

If there is some common software you would like available in the image(s), ask the maintainers or open a merge request.

Customizing the Interactive Container Shell

When using apptainer run, the default shell for interactive use is bash, for which we apply the following convenience settings:

.bashrc
# aliases
alias ls='ls --color=auto'
alias grep='grep --color=auto'

If you want to use an interactive shell without these settings applied:

apptainer shell [IMAGE]

Note

Different shells are available, such as tcsh, zsh, and fish; you may either run them from within an interactive container shell, or start your preferred shell directly using exec:

apptainer exec [IMAGE] zsh

Danger

Be careful, since by default, these shells may use configuration files from your host home directory, potentially overwriting certain environment variables set in the container image.

See below for more details and how to work around this.

If you prefer your own settings, you may either:

  1. Write your own configuration (rc) file, and use that; for example, if its name is my_rc_file:

    apptainer exec [IMAGE] bash --rcfile my_rc_file
    

  2. Use your host computer's shell settings, which makes the container interactive shell feel just like your host computer's shell; just pass your shell name, e.g., bash:

    apptainer exec [IMAGE] bash
    

    Danger

    This can be very convenient, but you need to be careful: if your host shell's configuration sets environment variables, such as common paths like $LD_LIBRARY_PATH or software-specific variables like $HIPO, doing this will modify the values of these variables in the container using the values from the host.

    Example

    $HIPO in a container points to the image's build of hipo, but if you set $HIPO in your shell's config file to point to your host computer's hipo build, the container will use your host's version of hipo.

    One way to work around this is to start a shell without sourcing configuration:

    bash --norc
    tcsh -f
    zsh --no-rcs
    fish --no-config
    
    So for example, to start zsh:
    apptainer exec [IMAGE] zsh --no-rcs
    

    This approach does not use your shell's custom settings, however; if you really want to use your custom settings, you'll need to go through your configuration files and make use of the environment variable $APPTAINER_CONTAINER, which should only be set to a non-empty value if you are in a container shell. For example, in your .bashrc:

    if [ -z "$APPTAINER_CONTAINER" ]; then
      export HIPO=/path/to/host/hipo
    fi
    

Tip

The default prompt in a container is Apptainer>. You can change it by setting $APPTAINERENV_PS1 on the host computer. For example:

export APPTAINERENV_PS1="$(tput bold)<<<<<<<< \w\n:        [\D{%a} \t] \u@\H\n>>> $(tput sgr0)"

Then,

apptainer shell [IMAGE]

has the following prompt:

<<<<<<<< /path/to/current/directory
:        [Fri 19:15:38] user_name@host_name
>>>

which makes it easier to see the separation between subsequent command outputs.