Mirror of https://github.com/MillironX/nf-configs.git, synced 2024-11-22 00:26:03 +00:00
Merge pull request #358 from aidaanva/master
Update eva.config bigmem.q
Commit 9e502025b9
2 changed files with 8 additions and 3 deletions
```diff
@@ -36,7 +36,7 @@ profiles {
         }
 
         process {
-            queue = 'archgen.q'
+            queue = { task.memory > 700.GB ? 'bigmem.q' : 'archgen.q' }
             clusterOptions = { "-S /bin/bash -V -j y -o output.log -l h_vmem=${task.memory.toGiga()}G" }
         }
 
```
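The replaced `queue` value is a dynamic directive: Nextflow evaluates the closure once per task, so the queue tracks each task's effective memory request instead of being fixed for the whole run. Below is a minimal sketch of how this plays together with retry-based memory escalation; the `memory`, `errorStrategy`, and `maxRetries` values are illustrative assumptions, not taken from `eva.config`:

```nextflow
process {
    // Illustrative escalation (not from eva.config): each retry
    // raises the request, so attempt 4 asks for 800 GB.
    memory        = { 200.GB * task.attempt }
    errorStrategy = 'retry'
    maxRetries    = 3

    // Evaluated per task: requests above 700 GB are routed to the
    // dynamic bigmem.q, everything else stays on archgen.q.
    queue          = { task.memory > 700.GB ? 'bigmem.q' : 'archgen.q' }

    // Pass the same per-attempt memory to SGE as an h_vmem limit,
    // keeping the scheduler's cap in sync with Nextflow's request.
    clusterOptions = { "-S /bin/bash -V -j y -o output.log -l h_vmem=${task.memory.toGiga()}G" }
}
```

With this setup, a task that fails on `archgen.q` at 600 GB is resubmitted at 800 GB and lands on `bigmem.q` without any manual intervention.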
```diff
@@ -4,7 +4,11 @@ All nf-core pipelines have been successfully configured for use on the Departmen
 
 To use, run the pipeline with `-profile eva`. You can further optimise submissions by specifying which cluster queue you are using, e.g. `-profile eva,archgen`. This will download and launch the [`eva.config`](../conf/eva.config) which has been pre-configured with a setup suitable for the `all.q` queue. The number of parallel jobs that run is currently limited to 8.
 
-Using this profile, a docker image containing all of the required software will be downloaded and converted to a `singularity` image before execution of the pipeline. The image will currently be centrally stored here:
+Using this profile, a docker image containing all of the required software will be downloaded and converted to a `singularity` image before execution of the pipeline.
+
+Institute-specific pipeline profiles exist for:
+
+- eager
 
 ## Additional Profiles
```
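As a rough illustration of the container handling these docs describe: an institutional profile typically enables Singularity and points it at a shared cache, and Nextflow then pulls the Docker image and converts it to a Singularity image on first use. This is a hedged sketch, not the actual `eva.config` contents; the cache path is the one documented in the characteristics list further down, while `autoMounts` is an assumption:

```nextflow
singularity {
    enabled    = true
    autoMounts = true  // assumption: bind host paths automatically
    // First run pulls the Docker image and converts it to a
    // Singularity image here; later runs reuse the cached copy.
    cacheDir   = '/mnt/archgen/users/singularity_scratch/cache/'
}
```

Pipeline invocation itself is unchanged, e.g. something like `nextflow run nf-core/eager -profile eva,archgen` for the eager profile mentioned above.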
```diff
@@ -16,9 +20,10 @@ If you specify `-profile eva,archgen` you will be able to use the nodes availabl
 
 Note the following characteristics of this profile:
 
-- By default, job resources are assigned a maximum of 32 CPUs, 256 GB memory and 720.h wall time.
+- By default, job resources are assigned a maximum of 32 CPUs, 256 GB memory and 365 days wall time.
 - Using this profile will currently store singularity images in a cache under `/mnt/archgen/users/singularity_scratch/cache/`. All archgen users currently have read/write access to this directory; however, this will likely change to a read-only directory in the future that will be managed by the IT team.
 - Intermediate files will be _automatically_ cleaned up (see `debug` below if you don't want this to happen) on successful run completion.
+- Jobs requesting more than 700 GB of memory will automatically be submitted to the dynamic `bigmem.q`.
 
 > NB: You will need an account and VPN access to use the cluster at MPI-EVA in order to run the pipeline. If in doubt contact the IT team.
 > NB: Nextflow will need to submit the jobs via SGE to the clusters and as such the commands above will have to be executed on one of the head nodes. If in doubt contact IT.
```
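The default resource ceilings in the first bullet are conventionally exposed as `max_*` parameters in nf-core institutional configs, which pipelines consult when clamping per-process requests. A sketch assuming that convention, with the values from the list above:

```nextflow
params {
    // Ceilings for any single job under this profile; nf-core
    // pipelines clamp per-process resource requests to these values.
    max_cpus   = 32
    max_memory = 256.GB
    max_time   = 365.d
}
```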