Update mjolnir_globe.md

Åshild J. Vågene, 2022-06-02 15:59:11 +02:00, committed by GitHub (commit e871dc87f9, parent 940077190b)
> **NB:** You will need an account on Mjolnir to run the pipeline. If in doubt contact IT.

Prior to running the pipeline for the first time with the `mjolnir_globe.config` profile (`../conf/mjolnir_globe.config`), users **must** create a hidden directory called `.tmp_eager` in their data/project directory on Mjolnir, where the temp files from `nf-core/eager` will be re-directed by the `NXF_TEMP` environment variable (see below):

```bash
# navigate into your data/project directory
cd /maps/projects/mjolnir1/people/$USER

# create the hidden .tmp_eager directory
mkdir .tmp_eager
```

The contents of the `.tmp_eager` directory should be periodically deleted manually to save on space; a one-liner for this is sketched below.
If `NXF_TEMP` is not used to properly re-direct temp files, the `/tmp` directory on the compute nodes will be used and quickly filled up, which blocks anyone from working on these nodes until the offending user removes their files.
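One way to do that periodic clean-up is a single `rm` over the directory contents; this is a sketch that assumes your `.tmp_eager` sits at the same path used by `NXF_TEMP` below:

```bash
# empty the .tmp_eager directory without removing the directory itself
rm -rf /maps/projects/mjolnir1/people/$USER/.tmp_eager/*
```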

The following lines **must** be added by users to their `~/.bash_profile`:
```bash
# re-direct tmp files away from /tmp directories on compute nodes or the headnode
export NXF_TEMP=/maps/projects/mjolnir1/people/$USER/.tmp_eager
NXF_OPTS='-Xms1g -Xmx4g'
```
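
To check that the redirect is active in a fresh shell (a quick sanity check, not part of the original docs):

```bash
# reload the profile and confirm NXF_TEMP points at your .tmp_eager
source ~/.bash_profile
echo "$NXF_TEMP"
```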

Once you have created the `.tmp_eager` directory and added the above lines to your `~/.bash_profile`, you can run the pipeline. The profile is configured to run with Singularity.

Before running the pipeline you will need to load Java, Miniconda, Singularity and Nextflow. You can do this by including the commands below in your SLURM/sbatch script:
```bash
module purge
module load jdk/1.8.0_291 miniconda/4.9.2 singularity/3.8.0 nextflow/21.04.1.5556
```
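
For illustration, a minimal sbatch wrapper might look like the sketch below; the job name, resource requests, input sheet and output directory are hypothetical placeholders rather than values from these docs:

```bash
#!/bin/bash
#SBATCH --job-name=eager_run   # hypothetical job name
#SBATCH --time=24:00:00        # adjust to your data
#SBATCH --mem=8G
#SBATCH --cpus-per-task=2

module purge
module load jdk/1.8.0_291 miniconda/4.9.2 singularity/3.8.0 nextflow/21.04.1.5556

# launch nf-core/eager with the Mjolnir GLOBE institutional profile
nextflow run nf-core/eager -profile mjolnir_globe --input samples.tsv --outdir results
```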

All of the intermediate output files required to run the pipeline will be stored in the `work/` directory. It is recommended to delete this directory after the pipeline has finished successfully because it can get quite large, and all of the main output files will be saved in the `results/` directory anyway.
The `mjolnir_globe` config contains a `cleanup` command that removes the `work/` directory automatically once the pipeline has completed successfully. If the run does not complete successfully then the `work/` dir should be removed manually to save storage space.
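
After a failed run, that manual clean-up is simply (assuming you launched the pipeline from the current directory):

```bash
# remove Nextflow's intermediate files left over from a failed run
rm -rf work/
```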

This configuration will automatically choose the correct SLURM queue (`short`, `medium`, `long`) depending on the time and memory required by each process.

> **NB:** Nextflow will need to submit the jobs via SLURM to the HPC cluster, and as such the commands above will have to be submitted from one of the login nodes.