From a97b887d9362f6b63634ea8f3c3e4b5fb9c38eaf Mon Sep 17 00:00:00 2001
From: SPearce
Date: Fri, 12 Aug 2022 16:57:04 +0100
Subject: [PATCH] Written a help file

---
 docs/crukmi.md | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/docs/crukmi.md b/docs/crukmi.md
index a55a9fa..609a840 100644
--- a/docs/crukmi.md
+++ b/docs/crukmi.md
@@ -1,9 +1,15 @@
-# nf-core/configs: BI Configuration
+# nf-core/configs: Cancer Research UK Manchester Institute Configuration
 
-All nf-core pipelines have been successfully configured for use at Boehringer Ingelheim.
+All nf-core pipelines have been successfully configured for use on the HPC (phoenix) at Cancer Research UK Manchester Institute.
 
-To use, run the pipeline with `-profile bi`. This will download and launch the [`bi.config`](../conf/bi.config) which has been pre-configured with a setup suitable for the BI systems. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline.
+To use, run the pipeline with `-profile crukmi`. This will download and launch the [`crukmi.config`](../conf/crukmi.config), which has been pre-configured with a setup suitable for the phoenix HPC. Using this profile, Singularity images will be downloaded to run on the cluster.
 
-Before running the pipeline you will need to follow the internal documentation to run Nextflow on our systems. Similar to that, you need to set an environment variable `NXF_GLOBAL_CONFIG` to the path of the internal global config which is not publicly available here.
+Before running the pipeline you will need to load Nextflow using the environment module system, for example via:
 
-> NB: Nextflow will need to submit the jobs via the job scheduler to the HPC cluster and as such the commands above will have to be executed on one of the login nodes. If in doubt contact IT.
+```bash
+## Load the Nextflow environment module
+module purge
+module load apps/nextflow/22.04.5
+```
+
+The pipeline should always be executed inside a workspace on the `/scratch/` system. All of the intermediate files required to run the pipeline will be stored in the `work/` directory. It is recommended to delete this directory after the pipeline has finished successfully, because it can get quite large; all of the main output files will be saved in the `results/` directory.
\ No newline at end of file
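
The launch procedure the patched documentation describes might look like the following sketch. The pipeline name (`nf-core/rnaseq`), the workspace path, and the samplesheet name are illustrative assumptions, not part of the patch; the module version and `-profile crukmi` come from the documentation above.

```shell
#!/usr/bin/env bash
# Hypothetical end-to-end launch from a /scratch workspace (run on a login node).
# Pipeline name, workspace path, and input file are examples only.
set -euo pipefail

cd /scratch/"$USER"/my_project          # work inside a /scratch workspace
module purge                            # start from a clean module environment
module load apps/nextflow/22.04.5      # version documented in the patch above

# Launch an nf-core pipeline with the institute profile; Nextflow submits
# jobs to the cluster scheduler, so this command stays running on the login node.
nextflow run nf-core/rnaseq -profile crukmi --input samplesheet.csv --outdir results

# After a successful run, the intermediate work/ directory can be removed;
# the main outputs live in results/.
rm -rf work/
```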