Mirror of https://github.com/MillironX/nf-configs.git (synced 2024-11-22 00:26:03 +00:00)
Merge pull request #235 from aunderwo/cambridge
Add Cambridge University HPC config
This commit is contained in: commit c12c373969
3 changed files with 37 additions and 0 deletions
@@ -100,6 +100,7 @@ Currently documentation is available for the following systems:
 * [BI](docs/bi.md)
 * [BINAC](docs/binac.md)
 * [BIOHPC_GEN](docs/biohpc_gen.md)
+* [CAMBRIDGE](docs/cambridge.md)
 * [CBE](docs/cbe.md)
 * [CCGA_DX](docs/ccga_dx.md)
 * [CCGA_MED](docs/ccga_med.md)
18  conf/cambridge.config  Normal file

@@ -0,0 +1,18 @@

params {
    config_profile_description = 'Cambridge HPC cluster profile.'
    config_profile_contact = 'Andries van Tonder (ajv37@cam.ac.uk)'
    config_profile_url = "https://docs.hpc.cam.ac.uk/hpc"
}
singularity {
    enabled = true
    autoMounts = true
}
process {
    executor = 'slurm'
    clusterOptions = '-p cclake'
}
params {
    max_memory = 192.GB
    max_cpus = 56
    max_time = 12.h
}
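This profile routes all jobs to Slurm's `cclake` partition via `clusterOptions`. If you need to pass extra Slurm options, such as a project account, one approach is to layer a small personal config on top with `-c`, which takes priority over the profile; a minimal sketch, where `MYPROJECT-CPU` is a placeholder account name and not part of this PR:

```groovy
// custom.config -- hypothetical user-side override, launched with:
//   nextflow run <pipeline> -profile cambridge -c custom.config
// The -A account name below is a placeholder; adjust to your Slurm project.
process {
    clusterOptions = '-p cclake -A MYPROJECT-CPU'
}
```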
18  docs/cambridge.md  Normal file

@@ -0,0 +1,18 @@

# nf-core/configs: Cambridge HPC Configuration

All nf-core pipelines have been successfully configured for use on the Cambridge HPC cluster at [The University of Cambridge](https://www.cam.ac.uk/).

To use, run the pipeline with `-profile cambridge`. This will download and launch the [`cambridge.config`](../conf/cambridge.config), which has been pre-configured with a setup suitable for the Cambridge HPC cluster. Using this profile, either a Docker image containing all of the required software will be downloaded and converted to a Singularity image, or a Singularity image will be downloaded directly, before execution of the pipeline.
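For illustration, a typical launch might look like the sketch below; the pipeline name and the `--input`/`--outdir` values are placeholders, not part of the committed documentation:

```bash
# Placeholder pipeline and paths; run this from a login node.
# -profile cambridge applies the conf/cambridge.config shown above.
nextflow run nf-core/rnaseq -profile cambridge \
    --input samplesheet.csv \
    --outdir results
```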
The latest version of Nextflow is not installed by default on the Cambridge HPC cluster. You will need to install it into a directory you have write access to. Follow these instructions from the Nextflow documentation (a sketch follows the link below):

- Install Nextflow: [here](https://www.nextflow.io/docs/latest/getstarted.html#)
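A minimal installation sketch, assuming a recent Java is already available on the login node and that `~/bin` exists and is on your `PATH` (both assumptions; check the HPC documentation):

```bash
# Download the self-installing Nextflow launcher into the current directory
curl -s https://get.nextflow.io | bash
# Move it to a directory you have write access to (assumed to be on PATH)
mkdir -p ~/bin
mv nextflow ~/bin/
# Confirm it runs
nextflow -version
```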
All of the intermediate files required to run the pipeline will be stored in the `work/` directory. It is recommended to delete this directory after the pipeline has finished successfully, as it can get quite large; all of the main output files will be saved in the `results/` directory anyway.
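For example, once you have confirmed the outputs in `results/`:

```bash
# Remove intermediate task directories after verifying the final outputs
rm -rf work/
```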
> NB: You will need an account to use the Cambridge HPC cluster in order to run the pipeline. If in doubt, contact IT.

> NB: Nextflow will need to submit the jobs via SLURM to the Cambridge HPC cluster, and as such the commands above will have to be executed on one of the login nodes. If in doubt, contact IT.