Overview

Singularity allows instantiating an alternative operating system environment on our Linux systems.

In other words:

  • If you need to develop software in a Scientific Linux 7 environment on your Ubuntu 16.04 desktop and then execute it on a Scientific Linux 6 system, Singularity makes that possible.
  • If you need to run software from Ubuntu 16.04 (like an up-to-date PDF viewer) or other software requiring recent versions of shared libraries (like the Atom editor) on the ancient SL6, Singularity makes that possible.
  • If you still have software for the discontinued SL5 around and need to run, build, develop or debug it on a supported system, Singularity makes that possible.

And all this without significant overhead. Sounds like fiction? It's fact. Right now. Some caveats do apply though.

Credit: Singularity was and is being developed at Berkeley Lab. Please see http://singularity.lbl.gov for upstream information.

News

  • 2018-02-21
    • Singularity was updated to release 2.4.2 on all supported systems
    • A set of updated standard images was provided - now in squashfs format which makes them even snappier to start and use

Invocation

There are several ways to instantiate a Singularity container and run software in it. The two most important ones are:

  • "Execute" an image
    Singularity images are executable. Executing one of the standard images provided by DV will give you an interactive shell in the container environment:

    [wgs03] ~ % lsb_release -d                     
    Description:    Red Hat Enterprise Linux Server release 6.9 (Santiago)
    [wgs03] ~ % /project/singularity/images/U16.img
    *************************************************************************
    * You are now running a shell in a  Singularity container providing an  *
    * Ubuntu 16.04 environment. Most things should just work as if you had  *
    * just logged in to  such a system, but there are differences.          *
    *                                                                       *
    * For more info, please see https://dvinfo.zeuthen.desy.de/Singularity  *
    *************************************************************************
    [wgs03:U16] ~ % lsb_release -d
    Description:    Ubuntu 16.04.2 LTS
    [wgs03:U16] ~ %
    Note the modified prompt. It's meant to make it clearer which environment you're currently working in.
  • Run an arbitrary command in the context of the image.

    [wgs03] ~ % lsb_release -d
    Description:    Red Hat Enterprise Linux Server release 6.9 (Santiago)
    [wgs03] ~ % singularity exec /project/singularity/images/SL5.img lsb_release -d
    Description:    Scientific Linux release 5.11 (Boron)
    [wgs03] ~ % 

Availability

Singularity containers can be run on any current user-accessible Linux system at DESY Zeuthen. In particular, Singularity is available on all

  • Linux Desktops
  • Workgroup Servers
  • Compute Nodes (Farm & Cluster)

The current set of standard images available in /project/singularity/images includes

  • Scientific Linux 5 (SL5.img)

  • Scientific Linux 6 (SL6.img)

  • Scientific Linux 7 (SL7.img)

  • Ubuntu 16.04 LTS (U16.img)

These short image names are symbolic links to the current versions of the standard images for these operating systems. The specific images are planned to be kept around for about a month, but not forever.

Note that users can copy a current image to any location to make sure that exact version remains available.
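
For example, a minimal sketch (the destination directory is hypothetical):

    # resolve the symlink to see which specific image version it currently points to
    readlink -f /project/singularity/images/SL6.img
    # copy that exact version to a location of your choice
    cp "$(readlink -f /project/singularity/images/SL6.img)" ~/my-images/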

Also note that users can bring their own images and thus don't have to rely on the ones provided by DV. Creating those is rather simple and documented upstream. It currently requires root access though, so it cannot be done on supported Linux systems in Zeuthen. But have a look at https://singularity-hub.org .
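
A minimal sketch of fetching a pre-built image from Singularity Hub without root privileges (the repository name is just a placeholder):

    # pull a pre-built image from Singularity Hub; no root privileges needed
    singularity pull shub://some-user/some-image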

Since release 2.3, it's also possible to import docker images without root privileges. Note that the environment variable SINGULARITY_CACHEDIR determines where downloaded image layers are stored, and it defaults to ~/.singularity. Change the variable to something more sensible to avoid wasting a lot of home space.
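
For example, a sketch that redirects the cache to /tmp before importing a docker image (the cache path is just an illustration):

    # keep downloaded docker layers out of your home directory
    export SINGULARITY_CACHEDIR=/tmp/$USER/singularity-cache
    mkdir -p "$SINGULARITY_CACHEDIR"
    # import a docker image without root privileges
    singularity pull docker://ubuntu:16.04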

Filesystems shared between Host and Container

Subject to availability on the host:

  • $HOME

  • /tmp

  • /afs

  • /lustre

  • /cvmfs

  • /batch

  • /acs (dCache)

  • /net (dCache)

  • /run/user

  • /proc

  • /dev

  • $PWD (the current working directory of the process invoking singularity)

Users can specify additional folders on the host to be bind mounted inside the container when the container is started. This shouldn't be required, but if it is, please see singularity help run for information on how to do it - and let DV know why you need it.
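
If you do need it, a sketch could look like this (host and container paths are hypothetical):

    # bind /data/myproject on the host to /data inside the container
    singularity exec -B /data/myproject:/data /project/singularity/images/SL7.img ls /data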

CUDA support

To run CUDA code in Singularity, the CUDA libraries must be available both inside and outside the container. Singularity can automatically bind the needed directories into the container like this:

singularity exec --nv -B /etc/OpenCL/vendors your-image.img yourprogram
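
If the host has a working NVIDIA driver, a quick sanity check could look like this (the image name is a placeholder; whether nvidia-smi is bound in depends on the local configuration):

    # verify that the GPU is visible inside the container
    singularity exec --nv /path/to/your-cuda-image.img nvidia-smi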

Caveats

  • The "standard images" are probably not final. If you need additional software or configuration, please let us know.

  • While we have made an effort to provide applications running in a Singularity container with the same set of significant environment variables (in particular, PATH and OS_ARCH should make sense), environment variables inside the container may differ from what you expect. DV can improve this if required.

  • While AFS just works, and the credentials (= tokens) inside a container are the same as outside, the AFS sysname is always that of the host. There is no way to change this, which is yet another reason not to rely on AFS sysname magic. To reiterate: using paths like /afs/zeuthen.desy.de/group/software/@sys/bin will not work as expected.

  • Automounted filesystems on SL6 hosts have to be mounted before the container starts, so the mount may have to be triggered outside the container. This affects cvmfs, lustre and dCache (see the example after this list).

  • Some special applications require compatibility between userland (tied to the container environment) and kernel (tied to the host). In particular:
    • CUDA
    • InfiniBand

    • MPI across hosts, especially over InfiniBand

    While all of those use cases are objectives of the Singularity project, actually making them work hasn't been explored yet. In particular, our standard images currently don't have CUDA installed - but see the note on CUDA above.
  • Setuid/Setgid bits have no effect in the container. We haven't found a case where this causes real problems, but it can be surprising. Affected:
    • ping

    • screen (workaround: use tmux instead)
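
Regarding the automount caveat above, a sketch of triggering the mount on the host before entering the container (the repository path is just an example):

    # access the automounted filesystem on the host first ...
    ls /cvmfs/some-repository.example.org > /dev/null
    # ... then start the container, which will see the already mounted path
    singularity shell /project/singularity/images/SL6.img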
