From 65831b73ef4c7e1257350810777f98ef46000d82 Mon Sep 17 00:00:00 2001
From: Åshild J. Vågene <60298098+ashildv@users.noreply.github.com>
Date: Thu, 21 Jan 2021 00:17:08 +0100
Subject: [PATCH] Create ceh.md

---
 docs/ceh.md | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
 create mode 100644 docs/ceh.md

diff --git a/docs/ceh.md b/docs/ceh.md
new file mode 100644
index 0000000..b9c8b4f
--- /dev/null
+++ b/docs/ceh.md
@@ -0,0 +1,21 @@

# nf-core/configs: Centre for Evolutionary Hologenomics / EvoGenomics (hologenomics partition on HPC) Configuration

The profile is configured to run with Singularity version 3.6.3-1.el7, which is part of the OS installation and does not need to be loaded as a module.

Before running the pipeline you will need to load Java, Miniconda and Nextflow. You can do this by including the commands below in your SLURM/sbatch script:

```bash
## Load Java, Miniconda and Nextflow environment modules
module purge
module load lib
module load java/v1.8.0_202-jdk miniconda nextflow/v20.07.1.5412
```

All of the intermediate files required to run the pipeline are stored in the `work/` directory. It is recommended to delete this directory after the pipeline has finished successfully, because it can get quite large and all of the main output files are saved in the `results/` directory anyway.
The config sets Nextflow's `cleanup` option, which removes the `work/` directory automatically once the pipeline has completed successfully. If the run does not complete successfully, the `work/` directory should be removed manually to save storage space.

This configuration will automatically choose the correct SLURM queue (`short`, `medium`, `long`) depending on the time and memory required by each process.

> NB: You will need an account to use the HPC cluster to run the pipeline. If in doubt, contact IT.

> NB: Nextflow will need to submit the jobs via SLURM to the HPC cluster, and as such the commands above will have to be submitted from one of the login nodes.
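As noted in the last point above, the Nextflow master process itself has to be started from a login node and submitted to SLURM. The script below is a minimal sketch of such an sbatch wrapper: the `ceh` profile name and the `nf-core/rnaseq` pipeline with its parameters are illustrative assumptions only, and the resource requests plus any partition or account directives should be adjusted to your own cluster setup.

```bash
#!/bin/bash
#SBATCH --job-name=nextflow-master   # the long-running Nextflow driver job
#SBATCH --time=48:00:00              # keep the driver alive for the whole run
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G

## Load Java, Miniconda and Nextflow environment modules
module purge
module load lib
module load java/v1.8.0_202-jdk miniconda nextflow/v20.07.1.5412

## Assumed profile name ('ceh') and placeholder pipeline/parameters -- adjust to your run
nextflow run nf-core/rnaseq -profile ceh --input samplesheet.csv --outdir results
```

Submit the wrapper from a login node with `sbatch`; Nextflow will then submit the individual process jobs to SLURM on your behalf.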
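If a run stops before completing, the automatic cleanup described above does not happen and the intermediate files are left behind. In that case the `work/` directory can be checked and removed by hand, for example:

```bash
## Check how much space the intermediate files are using, then remove them
du -sh work/
rm -rf work/
```

Only do this once you are sure you no longer need to resume the run with `-resume`, as resuming relies on the cached files in `work/`.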
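For reference, the automatic cleanup and queue selection described above are the kind of behaviour expressed in the profile's Nextflow config roughly as follows. This is only an illustrative sketch: the queue names match the ones listed above, but the time thresholds and executor settings are assumed values, not the actual ones in this profile, and the real profile also factors in memory, which is omitted here for brevity.

```nextflow
// Illustrative sketch only -- not the actual hologenomics/ceh profile.
// Time thresholds and queueSize below are assumed values.

cleanup = true   // remove work/ automatically after a successful run

process {
    executor = 'slurm'
    // pick a queue based on how much time each process requests
    queue = { task.time <= 12.h ? 'short' : task.time <= 24.h ? 'medium' : 'long' }
}

executor {
    queueSize = 10   // assumed cap on concurrently queued SLURM jobs
}
```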