[Figure: bind path diagram. The --bind /fh/fast/mydata/:/mydata/ option connects the external directory /fh/fast/mydata/ with /mydata/ inside the container filesystem, with reads and writes flowing in both directions through the bind.]
10 Testing Scripts
10.1 Developing and Testing Scripts
One of the hard things to understand is what can be run on a compute node versus the head node, and which file systems are accessible from a compute node.
Many of the issues you might run into come down to the mental model of how cluster computing works, and the best way to build that mental model is to test your code on a compute node.
Let’s explore how we can do that. You should also review the material about using screen (Section 12.7).
10.1.1 Testing code on a compute node
Fred Hutch users have the advantage of grabnode, a custom command that lets you request an interactive session on a compute node. (Non-FH folks can usually request an interactive session with srun --pty bash.)
Why would you want to do this? A big part of it is testing your software and making sure that your file paths are correct.
10.1.2 Grabbing an interactive shell on a worker
When you’re testing code that’s going to run on a worker node, you need to be aware of what the worker node sees.
An interactive session is also useful for estimating how long our tasks will run: we can time a task on a representative dataset and extrapolate to the full job.
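To make that estimate concrete, you can time a representative task with ordinary shell tools. The sketch below uses sleep 1 as a stand-in for your real command (for example, a samtools call on a single file); everything in it is standard shell.

```shell
# Time a representative task to estimate per-file runtime (sketch).
# Replace `sleep 1` with your actual command on one representative input.
start=$(date +%s)
sleep 1                           # stand-in for the real task
end=$(date +%s)
echo "elapsed: $((end - start)) seconds"
```

Multiplying the elapsed time by the number of inputs gives a rough lower bound for how long to request the node.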
grabnode
On the FH system, we can use a command called grabnode, which will let us request a node. It will ask us for our requirements (number of cores, memory, etc.) for our node.
tladera2@rhino01:~$ grabnode
grabnode will then ask us what kind of instance we want, in terms of CPUs, memory, and GPUs. Here, I’m grabbing a node with 8 cores and 8 GB of memory, using it for 1 day, with no GPU.
How many CPUs/cores would you like to grab on the node? [1-36] 8
How much memory (GB) would you like to grab? [160] 8
Please enter the max number of days you would like to grab this node: [1-7] 1
Do you need a GPU ? [y/N]n
You have requested 8 CPUs on this node/server for 1 days or until you type exit.
Warning: If you exit this shell before your jobs are finished, your jobs
on this node/server will be terminated. Please use sbatch for larger jobs.
Shared PI folders can be found in: /fh/fast, /fh/scratch and /fh/secure.
Requesting Queue: campus-new cores: 8 memory: 8 gpu: NONE
srun: job 40898906 queued and waiting for resources
After a little bit, you’ll arrive at a new prompt:
(base) tladera2@gizmok164:~$
Now you can test your batch scripts to make sure your file paths are correct. An interactive node is also helpful for profiling your job.
If you’re doing interactive analysis that is going to span over a few days, I recommend that you use screen or tmux.
10.2 Testing code in a container
In this section, we talk about testing scripts in a container using Apptainer. We use Apptainer (formerly Singularity) to run Docker containers on a shared HPC system, because Docker itself requires root-level privileges, which is not secure on shared systems.
In order to do our testing, we’ll first pull the Docker container, map our bind point (so our container can access files outside of its file system), and then run scripts in the container.
Even if you aren’t going to frequently use Apptainer in your work, I recommend trying an interactive shell in a container at least once or twice to learn about the container filesystem and conceptually understand how you connect it to the external filesystem.
I think the hardest thing about working with containers is wrapping your head around their indirectness. You are running software with its own internal filesystem, and the challenges are getting the container to read files from folders/paths outside of its own filesystem, as well as writing output files back to those outside folders.
10.2.1 Pulling a Docker Container
Let’s pull a Docker container from the Docker registry. Note that we have to specify docker:// when we pull the container, because Apptainer has its own internal format called SIF.
module load Apptainer/1.1.6
apptainer pull docker://biocontainers/samtools:v1.9-4-deb_cv1
10.2.2 Opening a Shell in a Container with apptainer shell
When you’re getting started, opening a shell using Apptainer can help you test out things like filepaths and how they’re accessed in the container. It’s hard to get an intuition for how file I/O works with containers until you can see the limited view from the container.
By default, an Apptainer container can see your current directory and navigate to the files in it.
You can open an Apptainer shell in a container using apptainer shell. Remember to use docker:// before the container name. For example:
module load Apptainer/1.1.6
apptainer shell docker://biocontainers/samtools:v1.9-4-deb_cv1
This will load the Apptainer module and then open a Bash shell in the container using apptainer shell. Once you’re in the container, you can test code, especially checking whether your files can be seen by the container (see Section 10.2.3). 90% of the issues with using Docker containers have to do with bind paths, so we’ll talk about those next.
Once you’re in the shell, you can take a look at where samtools is installed:
which samtools
Note that the container filesystem is isolated, and we need to explicitly build connections to it (called bind paths) to get files in and out. We’ll talk more about this in the next section.
Once we’re done testing scripts in our containers, we can exit the shell and get back into the node.
exit
10.2.3 Using bind paths in containers
One thing to keep in mind is that every container has its own filesystem. One of the hardest things to wrap your head around for containers is how their filesystems work, and how to access files that are outside of the container filesystem. We’ll call any filesystems outside of the container external filesystems to make the discussion a little easier.
By default, the containers have access to your current working directory. We could make that the place where our scripts live (such as /home/tladera2/), but because our data lives elsewhere, we’ll need to specify that location (/fh/fast/mylab/) as well.
The main mechanism we have in Apptainer for accessing the external filesystem is bind paths. Much like mounting a drive, we can bind directories from the external filesystem using these bind paths.
I think of bind paths as “tunnels” that give access to particular folders in the external filesystem. Once the tunnel is open, we can access data files, process them, and save them using the bind path.
Say my data lives in /fh/fast/mydata/. Then I can specify a bind point in my apptainer shell and apptainer run commands.
We can do this with the --bind option:
apptainer shell --bind /fh/fast/mydata:/mydata docker://biocontainers/samtools:v1.9-4-deb_cv1
Note that the bind syntax doesn’t have the trailing slash (/). That is, it is:
--bind /fh/fast/mydata: ....
Rather than
--bind /fh/fast/mydata/: ....
Now our /fh/fast/mydata/ folder will be available as /mydata/ in my container. We can read and write files to this bind point. For example, I’d refer to the .bam file /fh/fast/mydata/my_bam_file.bam as:
samtools view -c /mydata/my_bam_file.bam
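To build intuition for what --bind does to a path, here is a small shell sketch. This prefix swap is illustrative only — something we’re doing by hand to mimic the mapping, not an Apptainer feature:

```shell
# Illustrative only: mimic what --bind SRC:DEST does to a file path.
bind_spec="/fh/fast/mydata:/mydata"   # same SRC:DEST form as --bind
src="${bind_spec%%:*}"                # external prefix: /fh/fast/mydata
dest="${bind_spec##*:}"               # in-container prefix: /mydata
external_path="/fh/fast/mydata/my_bam_file.bam"
# Swap the external prefix for the in-container prefix:
container_path="${dest}${external_path#"$src"}"
echo "$container_path"                # prints /mydata/my_bam_file.bam
```

Any file under the bound external directory is reachable inside the container by the same swap.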
For the most part, we don’t use Docker on HPC systems for security reasons. In short, the docker group essentially has root-level access to the machine, which is not good for security on a shared resource like an HPC cluster.
However, if you have admin level access (for example, on your own laptop), you can open up an interactive shell with docker run -it:
docker run -it biocontainers/samtools:v1.9-4-deb_cv1 /bin/bash
This will open a Bash shell, much like apptainer shell. Note that volumes (the Docker equivalent of bind paths) are specified differently in Docker compared to Apptainer.
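For completeness, Docker’s rough equivalent of --bind is the -v (volume) flag, which also uses a host:container form. This is a sketch that assumes you are on a machine where you have Docker access; the paths are the same illustrative ones as above.

```shell
# Sketch: Docker equivalent of the apptainer shell --bind example above.
# -v maps a host directory into the container (host_path:container_path).
docker run -it \
  -v /fh/fast/mydata:/mydata \
  biocontainers/samtools:v1.9-4-deb_cv1 \
  /bin/bash
```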
A major point of failure with Apptainer scripting is when our scripts aren’t using the right bind paths. It becomes even more complicated when you are running multiple steps.
This is one reason we recommend writing WDL workflows and using a workflow engine (such as Sprocket) to run them. You don’t have to worry about whether your bind points are set up correctly, because they are handled by the workflow engine.
10.2.4 Testing in the Apptainer Shell
Ok, now we have a bind point, so we can test our script in the shell. For example, we can check that we are invoking samtools correctly and that our bind points work.
samtools view -c /mydata/my_bam_file.bam > /mydata/bam_counts.txt
Again, trying out scripts in the container is the best way to understand what the container can and can’t see.
10.2.5 Exiting the container when you’re done
You can type exit to leave the container, like any shell you open. Confirm that you’re out of the container by running hostname.
10.2.6 Testing outside of the container
Let’s take everything that we learned and put it in a script that we can run on the HPC:
#!/bin/bash
# Script to run samtools view -c on an input file:
# Usage: ./run_sam.sh <my_bam_file.bam>
# Outputs a count file: my_bam_file.bam.counts.txt
module load Apptainer/1.1.6
# Note: the redirect (>) is handled by the host shell, not the container,
# so it must use the external path, not the bind path:
apptainer run --bind /fh/fast/mydata:/mydata \
  docker://biocontainers/samtools:v1.9-4-deb_cv1 \
  samtools view -c "/mydata/$1" > "/fh/fast/mydata/$1.counts.txt"
#apptainer cache clean
module purge
We can use this script with the following command:
./run_sam.sh chr1.bam
And it will output a file called chr1.bam.counts.txt.