
Merge pull request #93 from jfy133/master

Updated max resources for SHH SDAG
Alexander Peltzer 2019-11-26 15:56:39 +01:00 committed by GitHub
commit 81739a3907
2 changed files with 5 additions and 3 deletions

conf/shh.config

@@ -1,7 +1,7 @@
 //Profile config names for nf-core/configs
 params {
   config_profile_description = 'MPI SHH cluster profile provided by nf-core/configs.'
-  config_profile_contact = 'James Fellows Yates (@jfy133)'
+  config_profile_contact = 'James Fellows Yates (@jfy133), Maxime Borry (@Maxibor)'
   config_profile_url = 'https://shh.mpg.de'
 }
@@ -23,8 +23,8 @@ executor {
 params {
   max_memory = 2.TB
-  max_cpus = 32
+  max_cpus = 128
   max_time = 720.h
   //Illumina iGenomes reference file path
   igenomes_base = "/projects1/public_data/igenomes/"
 }
 }
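
For context: these `max_*` values are not scheduler limits in themselves. nf-core pipelines route every process's resource request through a `check_max()` helper, defined in each pipeline's own `nextflow.config` rather than in this profile, which caps the request at the profile's maximums. A minimal sketch of the CPU case, reproduced from the general nf-core pattern rather than from this commit:

```nextflow
// Sketch of the nf-core check_max() pattern (cpus case only); the real
// helper also handles 'memory' and 'time' and lives in each pipeline's
// nextflow.config, not in this profile.
def check_max(obj, type) {
    if (type == 'cpus') {
        try {
            // Cap the process request at the profile-wide maximum,
            // e.g. 128 CPUs after this commit.
            return Math.min(obj as int, params.max_cpus as int)
        } catch (all) {
            println "WARNING: max_cpus '${params.max_cpus}' is not valid! Using default value: $obj"
            return obj
        }
    }
    return obj
}
```

A process can then declare, for example, `cpus = { check_max( 16 * task.attempt, 'cpus' ) }` and request more CPUs on each retry without ever exceeding `max_cpus`.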

docs/shh.md

@@ -11,7 +11,9 @@ To use, run the pipeline with `-profile shh`. This will download and launch the
however this will likely change to a read-only directory in the future that will be managed by IT.
This configuration will automatically choose the correct SLURM queue (`short`,`medium`,`long`,`supercruncher`) depending on the time and memory required by each process.
Please note that there is no `supercruncher` queue on CDAG.
>NB: You will need an account and VPN access to use the cluster at MPI-SHH in order to run the pipeline. If in doubt contact IT.
>NB: Nextflow will need to submit the jobs via SLURM to the clusters and as such the commands above will have to be executed on one of the head nodes. If in doubt contact IT.
>NB: The maximum CPUs/memory values are currently adapted to SDAG's resource maximums, i.e. they will exceed those of CDAG. Be careful when running large jobs, as error-retries may exceed these limits and get 'stuck' in SLURM.
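
The automatic queue choice described above is typically implemented as a dynamic `queue` directive in the profile, keyed on each task's requested time and memory. A hypothetical sketch with illustrative thresholds (the real cut-offs live in `conf/shh.config`):

```nextflow
// Hypothetical queue mapping; the thresholds below are illustrative,
// not the actual values from conf/shh.config.
process {
    executor = 'slurm'
    queue = {
        // Very large memory jobs go to supercruncher (SDAG only).
        if (task.memory && task.memory > 756.GB) return 'supercruncher'
        // Otherwise pick a queue by requested walltime
        // (assumes every process sets a time limit).
        if (task.time <= 2.h)  return 'short'
        if (task.time <= 48.h) return 'medium'
        return 'long'
    }
}
```

Because the directive is a closure, it is re-evaluated on every retry, so a job whose resource requests grow past a threshold automatically moves to a longer queue.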