# nf-core/configs: SHH Configuration
All nf-core pipelines have been successfully configured for use on the Department of Archaeogenetics' SDAG/CDAG clusters at the [Max Planck Institute for the Science of Human History (MPI-SHH)](http://shh.mpg.de).
To use, run the pipeline with `-profile shh`. This will download and launch [`shh.config`](../conf/shh.config), which has been pre-configured with a setup suitable for the SDAG and CDAG clusters. Using this profile, a Docker image containing all of the required software will be downloaded and converted to a Singularity image before execution of the pipeline. The image is currently centrally stored here:
```bash
/projects1/singularity_scratch/cache/
```
However, this will likely change in the future to a read-only directory managed by IT.
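A typical invocation from one of the head nodes looks like the following (the pipeline name and input path are illustrative placeholders; substitute any nf-core pipeline and your own data):

```bash
# Launch an nf-core pipeline on SDAG/CDAG using the shh profile.
# nf-core/eager and the --input glob are examples, not requirements.
nextflow run nf-core/eager -profile shh --input '/path/to/my_data/*.fastq.gz'
```

Nextflow will then submit each process to SLURM on your behalf, so no manual `sbatch` wrapping is needed.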
This configuration will automatically choose the correct SLURM queue (`short`, `medium`, `long`, `supercruncher`) depending on the time and memory required by each process.
Please note that there is no `supercruncher` queue on CDAG.
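Such queue selection is typically implemented in the profile as a Nextflow closure evaluated per process. The sketch below shows the general shape only; the queue thresholds are assumptions for illustration, not the actual contents of `shh.config`:

```groovy
// Hypothetical sketch of time-based SLURM queue selection in a Nextflow config.
// The 2h/48h cut-offs are illustrative assumptions.
process {
    executor = 'slurm'
    queue = { task.time <= 2.h ? 'short'
            : task.time <= 48.h ? 'medium'
            : 'long' }
}
```

Because the closure is re-evaluated on each task submission, retried tasks that request more time can automatically move to a longer queue.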
>NB: You will need an account and VPN access to use the cluster at MPI-SHH in order to run the pipeline. If in doubt, contact IT.
>NB: Nextflow will need to submit the jobs via SLURM to the clusters, and as such the commands above will have to be executed on one of the head nodes. If in doubt, contact IT.
>NB: The maximum CPUs/memory values are currently adapted to SDAG's resource maximums, i.e. they will exceed those of CDAG. Be careful when running large jobs, as error retries may exceed these limits and get 'stuck' in SLURM.