mirror of https://github.com/MillironX/nf-configs.git synced 2024-11-10 20:13:09 +00:00

initial commit

This commit is contained in:
Spix 2022-03-07 15:01:46 -05:00
parent d0d86bfb2a
commit 429c7b4fa8
5 changed files with 40 additions and 0 deletions


@@ -70,6 +70,7 @@ jobs:
- 'utd_ganymede'
- 'utd_sysbio'
- 'uzh'
- 'vai'
steps:
- uses: actions/checkout@v1
- name: Install Nextflow


@@ -137,6 +137,7 @@ Currently documentation is available for the following systems:
* [UTD_GANYMEDE](docs/utd_ganymede.md)
* [UTD_SYSBIO](docs/utd_sysbio.md)
* [UZH](docs/uzh.md)
* [VAI](docs/vai.md)
### Uploading to `nf-core/configs`

conf/vai.config Normal file, 23 additions

@@ -0,0 +1,23 @@
params {
    config_profile_description = 'Van Andel Institute HPC profile provided by nf-core/configs.'
    config_profile_contact = 'Nathan Spix (@njspix)'
    config_profile_url = 'https://vanandelinstitute.sharepoint.com/sites/SC/SitePages/HPC3-High-Performance-Cluster-and-Cloud-Computing.aspx'
    max_memory = 250.GB
    max_cpus = 40
    max_time = 640.h
}

process {
    beforeScript = 'module load singularity'
    executor = 'pbs'
    queue = { task.time <= 48.h ? 'shortq' : 'longq' }
    maxRetries = 2
}

singularity {
    enabled = true
    autoMounts = true
}
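The `queue` closure in this config routes each task by its requested walltime: jobs asking for 48 hours or less go to `shortq`, anything longer to `longq`. A minimal shell sketch of that selection logic, for illustration only (the function name is made up and is not part of the config):

```shell
# Mirrors the vai.config queue selector: walltime <= 48 h -> 'shortq',
# otherwise 'longq'. Illustrative helper, not used by Nextflow itself.
select_queue() {
  local hours=$1
  if [ "$hours" -le 48 ]; then
    echo shortq
  else
    echo longq
  fi
}

select_queue 12   # shortq
select_queue 72   # longq
```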

docs/vai.md Normal file, 14 additions

@@ -0,0 +1,14 @@
# nf-core/configs: VAI configuration
All nf-core pipelines have been successfully configured for use on the HPC cluster at Van Andel Institute.
To use, run the pipeline with `-profile vai`. This will download and launch the [`vai.config`](../conf/vai.config), which has been pre-configured with a setup suitable for the VAI HPC. Using this profile, a Docker image containing all of the required software will be downloaded and converted to a Singularity image before execution of the pipeline.
```bash
module load singularity
NXF_OPTS="-Xmx500m" MALLOC_ARENA_MAX=4 nextflow run <pipeline>
```
> NB: You will need an account to use the HPC in order to run the pipeline. If in doubt contact IT.
>
> NB: Nextflow will need to submit the jobs via the job scheduler to the HPC cluster, so the commands above have to be executed on the login node. If in doubt contact IT.
>
> NB: The submit node limits the amount of memory available to each user. The `NXF_OPTS` and `MALLOC_ARENA_MAX` environment variables above prevent Nextflow from allocating more memory than the scheduler will allow.


@@ -64,4 +64,5 @@ profiles {
    utd_ganymede { includeConfig "${params.custom_config_base}/conf/utd_ganymede.config" }
    utd_sysbio { includeConfig "${params.custom_config_base}/conf/utd_sysbio.config" }
    uzh { includeConfig "${params.custom_config_base}/conf/uzh.config" }
    vai { includeConfig "${params.custom_config_base}/conf/vai.config" }
}