
Merge branch 'nf-core:master' into sbc_sharc

Commit 1eea9b464a, authored by Lewis Quayle on 2022-09-28 10:49:05 +01:00 and committed via GitHub.
4 changed files with 10 additions and 9 deletions

File 1 of 4

@@ -11,9 +11,8 @@ env {
 process {
     executor = 'slurm'
-    queue = { task.memory <= 1536.GB ? (task.time > 2.d || task.memory > 384.GB ? 'biohpc_gen_production' : 'biohpc_gen_normal') : 'biohpc_gen_highmem' }
-    beforeScript = 'module use /dss/dsslegfs02/pn73se/pn73se-dss-0000/spack/modules/x86_avx2/linux*'
-    module = 'charliecloud/0.22:miniconda3'
+    queue = { task.memory <= 1536.GB ? (task.time > 2.d || task.memory > 384.GB ? 'biohpc_gen_production' : 'biohpc_gen_normal') : 'biohpc_gen_highmem' }
+    module = 'charliecloud/0.25'
 }
 charliecloud {
@@ -21,7 +20,7 @@ charliecloud {
 }
 params {
-    params.max_time = 14.d
-    params.max_cpus = 80
+    params.max_time = 14.d
+    params.max_cpus = 80
     params.max_memory = 3.TB
 }
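The `queue` setting carried over in the first hunk is a dynamic directive: Nextflow evaluates the closure once per task, routing each job to a SLURM partition based on the resources it requests. For readability, the same logic reformatted with comments (a sketch, not code from this commit):

```groovy
// Same routing logic as the one-line queue closure in the diff, spread out:
process {
    executor = 'slurm'
    queue = {
        task.memory <= 1536.GB ?
            (task.time > 2.d || task.memory > 384.GB ?
                'biohpc_gen_production' :   // long (> 2 days) or large (> 384 GB) jobs
                'biohpc_gen_normal') :      // everything else up to 1536 GB
            'biohpc_gen_highmem'            // jobs requesting more than 1536 GB
    }
}
```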

File 2 of 4

@@ -29,9 +29,12 @@ aws {
     region = "us-east-1"
     client {
         uploadChunkSize = 209715200
+        uploadMaxThreads = 4
     }
     batch {
         maxParallelTransfers = 1
+        maxTransferAttempts = 5
+        delayBetweenAttempts = '120 sec'
     }
 }
 executor {
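Taken together, the new options trade transfer throughput for reliability when staging files through S3. A commented sketch of the resulting `aws` scope (values exactly as in the hunk; the comments are editorial):

```groovy
// Sketch of the aws scope after this change, with the intent of each option noted.
aws {
    region = "us-east-1"
    client {
        uploadChunkSize = 209715200   // 200 MiB multipart chunks (200 * 1024 * 1024)
        uploadMaxThreads = 4          // cap concurrent upload threads per transfer
    }
    batch {
        maxParallelTransfers = 1      // stage one file at a time per job
        maxTransferAttempts = 5       // retry a failed transfer up to five times
        delayBetweenAttempts = '120 sec'  // wait two minutes between attempts
    }
}
```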

File 3 of 4

@@ -4,14 +4,12 @@ All nf-core pipelines have been successfully configured for use on the BioHPC Gen
 To use, run the pipeline with `-profile biohpc_gen`. This will download and launch the [`biohpc_gen.config`](../conf/biohpc_gen.config) which has been pre-configured with a setup suitable for the biohpc_gen cluster. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Charliecloud container before execution of the pipeline.
-Before running the pipeline you will need to load Nextflow and Charliecloud using the environment module system on biohpc_gen. You can do this by issuing the commands below:
+Before running the pipeline you will need to load Nextflow and Charliecloud using the environment module system on a login node. You can do this by issuing the commands below:
 ```bash
 ## Load Nextflow and Charliecloud environment modules
 module purge
-module load nextflow charliecloud/0.22
+module load nextflow/21.04.3 charliecloud/0.25
 ```
-> NB: Charliecloud support requires Nextflow version `21.03.0-edge` or later.
 > NB: You will need an account to use the LRZ Linux cluster as well as group access to the biohpc_gen cluster in order to run nf-core pipelines.
-> NB: Nextflow will need to submit the jobs via the job scheduler to the HPC cluster and as such the commands above will have to be executed on one of the login nodes.
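For context on how `-profile biohpc_gen` finds this configuration: nf-core pipelines fetch institutional profiles from the nf-core/configs repository at run time. A simplified sketch of that wiring, modeled on the nf-core pipeline template's `nextflow.config` (parameter names follow the template, not this commit):

```groovy
// Sketch, modeled on the nf-core template: where institutional profiles come from.
params.custom_config_version = 'master'
params.custom_config_base = "https://raw.githubusercontent.com/nf-core/configs/${params.custom_config_version}"

// Load nf-core custom profiles; nfcore_custom.config declares one profile per
// institution (biohpc_gen among them), each including its conf/<name>.config.
try {
    includeConfig "${params.custom_config_base}/nfcore_custom.config"
} catch (Exception e) {
    System.err.println("WARNING: Could not load nf-core/config profiles: ${params.custom_config_base}/nfcore_custom.config")
}
```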

File 4 of 4

@@ -8,6 +8,7 @@ This global configuration includes the following tweaks:
 - Enable retries by default when exit codes relate to insufficient memory
 - Allow pending jobs to finish if the number of retries are exhausted
 - Increase the amount of time allowed for file transfers
+- Improve reliability of file transfers with retries and reduced concurrency
 - Increase the default chunk size for multipart uploads to S3
 - Slow down job submission rate to avoid overwhelming any APIs
 - Define the `check_max()` function, which is missing in Sarek v2
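The last bullet refers to the resource-capping helper from the nf-core pipeline template, which Sarek v2 predates and therefore does not define itself. A sketch along the lines of the template's implementation (it assumes `params.max_memory`, `params.max_time`, and `params.max_cpus` are set elsewhere in the config):

```groovy
// check_max() clamps a requested resource to the configured params.max_* ceiling,
// falling back to the original request if the ceiling cannot be parsed.
def check_max(obj, type) {
    if (type == 'memory') {
        try {
            if (obj.compareTo(params.max_memory as nextflow.util.MemoryUnit) == 1)
                return params.max_memory as nextflow.util.MemoryUnit
            else
                return obj
        } catch (all) {
            println "   ### ERROR ###   Max memory '${params.max_memory}' is not valid! Using default value: $obj"
            return obj
        }
    } else if (type == 'time') {
        try {
            if (obj.compareTo(params.max_time as nextflow.util.Duration) == 1)
                return params.max_time as nextflow.util.Duration
            else
                return obj
        } catch (all) {
            println "   ### ERROR ###   Max time '${params.max_time}' is not valid! Using default value: $obj"
            return obj
        }
    } else if (type == 'cpus') {
        try {
            return Math.min(obj, params.max_cpus as int)
        } catch (all) {
            println "   ### ERROR ###   Max cpus '${params.max_cpus}' is not valid! Using default value: $obj"
            return obj
        }
    }
}
```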