7 Testing Scripts
7.1 Developing and Testing Scripts
One of the hard things to understand is what can be run on a compute node versus the head node, and which file systems are accessible from a compute node.
Many of the issues you will run into come down to having the right mental model of how cluster computing works, and the best way to build that mental model is to test your code on a compute node.
Let’s explore how we can do that.
7.1.1 Testing code on a compute node
Fred Hutch users have the advantage of grabnode, a custom command that lets you request an interactive instance of a compute node.
Why would you want to do this? A big part of it is testing your software and making sure that your file paths are correct.
7.1.2 Grabbing an interactive shell on a worker
When you’re testing code that’s going to run on a worker node, you need to be aware of what the worker node sees.
It’s also important for estimating how long our tasks are going to run, since we can time a task on a representative dataset.
On a SLURM system, the way to open interactive shells on a node has changed. Check your version first:
srun --version
If you’re on a version before 20.11, you can use srun --pty bash to open an interactive terminal on a worker:
srun --pty bash
If the version is 20.11 or later, we can open an interactive shell on a worker with salloc:
salloc
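For example, a request for 4 cores and 8 GB of memory for two hours might look like this (the flag values here are just placeholders; adjust them to your cluster’s partitions and limits):
salloc -c 4 --mem=8G -t 2:00:00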
grabnode
On the FH system, we can use a command called grabnode, which will let us request a node. It will ask us for our requirements (number of cores, memory, etc.) for our node.
tladera2@rhino01:~$ grabnode
grabnode will then ask us what kind of instance we want, in terms of CPUs, memory, and GPUs. Here, I’m grabbing a node with 8 cores and 8 GB of memory, using it for 1 day, with no GPU.
How many CPUs/cores would you like to grab on the node? [1-36] 8
How much memory (GB) would you like to grab? [160] 8
Please enter the max number of days you would like to grab this node: [1-7] 1
Do you need a GPU ? [y/N]n
You have requested 8 CPUs on this node/server for 1 days or until you type exit.
Warning: If you exit this shell before your jobs are finished, your jobs
on this node/server will be terminated. Please use sbatch for larger jobs.
Shared PI folders can be found in: /fh/fast, /fh/scratch and /fh/secure.
Requesting Queue: campus-new cores: 8 memory: 8 gpu: NONE
srun: job 40898906 queued and waiting for resources
After a little bit, you’ll arrive at a new prompt:
(base) tladera2@gizmok164:~$
Now you can test your batch scripts to make sure your file paths are correct. This is also helpful for profiling your job.
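For example, once you’re on the node, you can wrap a representative step in time to get a rough runtime estimate (a sketch; process_sample.sh and subset.bam are hypothetical stand-ins for your own script and a small test dataset):
time ./process_sample.sh subset.bam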
If you’re doing interactive analysis that is going to span a few days, I recommend that you use screen or tmux.
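For example, a minimal tmux workflow looks like this (the session name analysis is arbitrary):
tmux new -s analysis     # start a named session and run your work inside it
# detach with Ctrl-b then d; the session keeps running
tmux attach -t analysis  # reattach later to pick up where you left off
A common pattern is to start tmux on the login node and then run grabnode inside the tmux session, so your interactive work survives a dropped SSH connection.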
7.2 Working with containers
I think the hardest thing about working with containers is wrapping your head around their indirectness. You are running software that has its own internal filesystem, and the challenge is getting the container to read files from folders/paths outside of its own filesystem, as well as to write output files back to those outside folders.
7.2.1 Testing code in a container
In this section, we talk about testing scripts in a container using apptainer. We use apptainer (formerly Singularity) to run Docker containers on a shared HPC system, because Docker itself requires root-level privileges, which is not secure on shared systems.
In order to do our testing, we’ll first pull the Docker container, map our bind point (so our container can access files outside of its file system), and then run scripts in the container.
Even if you aren’t going to frequently use Apptainer in your work, I recommend trying an interactive shell in a container at least once or twice to learn about the container filesystem and conceptually understand how you connect it to the external filesystem.
7.2.2 Pulling a Docker Container
Let’s pull a Docker container from the Docker registry. Note we have to specify docker:// when we pull the container, because Apptainer has its own internal format called SIF.
module load Apptainer/1.1.6
apptainer pull docker://biocontainers/samtools:v1.9-4-deb_cv1
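apptainer pull writes a SIF image file into your current directory; the filename is typically derived from the image name and tag (here it should be something like samtools_v1.9-4-deb_cv1.sif; check ls for the exact name). You can point later commands at that local file instead of re-pulling, for example:
apptainer exec samtools_v1.9-4-deb_cv1.sif samtools --version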
7.2.3 Opening a Shell in a Container with apptainer shell
When you’re getting started, opening a shell using Apptainer can help you test out things like filepaths and how they’re accessed in the container. It’s hard to get an intuition for how file I/O works with containers until you can see the limited view from the container.
By default, Apptainer containers can see your current directory and navigate to the files in it.
You can open an Apptainer shell in a container using apptainer shell. Remember to use docker:// before the container name. For example:
module load Apptainer/1.1.6
apptainer shell docker://biocontainers/samtools:v1.9-4-deb_cv1
This will load the apptainer module, and then open a Bash shell in the container using apptainer shell. Once you’re in the container, you can test code, especially checking whether your files are visible to the container (see Section 5.1.6). 90% of the issues with using Docker containers have to do with bind paths, so we’ll talk about that next.
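For example, from inside the shell you can check whether a host directory is visible (a sketch; /fh/fast/mydata is a placeholder for your own path):
ls /fh/fast/mydata
If that path isn’t bound into the container, this will fail with “No such file or directory”, which is your clue that you need a bind path.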
Once you’re in the shell, you can take a look at where samtools is installed:
which samtools
Note that the container filesystem is isolated, and we need to explicitly build connections to it (called bind paths) to get files in and out. We’ll talk more about this in the next section.
Once we’re done testing scripts in our containers, we can exit the shell and get back into the node.
exit
For the most part, due to security reasons, we don’t use docker on HPC systems. In short, the docker group essentially has root-level access to the machine, which is not good for security on a shared resource like an HPC cluster.
However, if you have admin-level access (for example, on your own laptop), you can open up an interactive shell with docker run -it:
docker run -it biocontainers/samtools:v1.9-4-deb_cv1 /bin/bash
This will open a bash shell much like apptainer shell. Note that volumes (the Docker equivalent of bind paths) are specified differently in Docker compared to Apptainer.
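For example, a hypothetical host directory /data/on/host could be mounted into the container at /mydata with Docker’s -v flag (this plays the same role as Apptainer’s --bind, which we use in the next section):
docker run -it -v /data/on/host:/mydata biocontainers/samtools:v1.9-4-deb_cv1 /bin/bash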
A major point of failure with Apptainer scripting is when our scripts aren’t using the right bind paths. It becomes even more complicated when you are running multiple steps.
This is one reason we recommend writing WDL workflows and using a workflow manager (such as Sprocket) to run them. You don’t have to worry about whether your bind points are set up correctly, because they are handled by the workflow manager.
7.2.4 Testing in the Apptainer Shell
Ok, now let’s set up a bind point so we can test our script in the shell. For example, we can check that we are invoking samtools in the correct way and that our bind points work.
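Here’s a minimal sketch of opening a shell with a bind path; /fh/fast/mydata is a placeholder for wherever your data actually lives on the host:
apptainer shell --bind /fh/fast/mydata:/mydata docker://biocontainers/samtools:v1.9-4-deb_cv1
Inside this shell, /mydata maps to that host directory, so we can run: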
samtools view -c /mydata/my_bam_file.bam > /mydata/bam_counts.txt
Again, trying out scripts in the container is the best way to understand what the container can and can’t see.
7.2.5 Exiting the container when you’re done
You can type exit, just like in any shell you open, and you should be back out of the container. You can confirm with hostname that you’re no longer in the container.
7.2.6 Testing outside of the container
Let’s take everything that we learned and put it in a script that we can run on the HPC:
#!/bin/bash
# Script to run samtools view -c on an input BAM file
# Usage: ./run_sam.sh <my_bam_file.bam>
# Outputs a count file: my_bam_file.bam.counts.txt
module load Apptainer/1.1.6
# /fh/fast/mydata on the host is bound to /mydata inside the container.
# The output redirect is handled by the host shell, so it uses the host path.
apptainer run --bind /fh/fast/mydata:/mydata docker://biocontainers/samtools:v1.9-4-deb_cv1 samtools view -c /mydata/"$1" > /fh/fast/mydata/"$1".counts.txt
#apptainer cache clean
module purge
We can run this script with the following command:
./run_sam.sh chr1.bam
And it will output a file called chr1.bam.counts.txt in /fh/fast/mydata.
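Once the script works interactively, you can also submit it as a batch job rather than running it on a grabbed node (a sketch; in practice you would usually add #SBATCH directives for cores, memory, and time at the top of the script):
sbatch run_sam.sh chr1.bam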