Add Template for Documentation of Cluster Resources
This commit is contained in:
parent ef41306a9f
commit bcd85940fa

2 changed files with 32 additions and 1 deletion
@@ -47,7 +47,7 @@ nextflow run nf-core/rnaseq --reads '*_R{1,2}.fastq.gz' --genome GRCh37 -c '[pat

### Documentation

-You will have to create a [Markdown document](https://www.markdownguide.org/getting-started/) outlining the details required to use the custom config file within your organisation.
+You will have to create a [Markdown document](https://www.markdownguide.org/getting-started/) outlining the details required to use the custom config file within your organisation. You can orient yourself using the [template](docs/template.md) that we provide and fill in the information for your cluster there.

See [`nf-core/configs/docs`](https://github.com/nf-core/configs/tree/master/docs) for examples.
docs/template.md (new file, 31 lines)

@@ -0,0 +1,31 @@
# nf-core/configs: PROFILE Configuration

All nf-core pipelines have been successfully configured for use on the PROFILE CLUSTER at the [insert institution here].

To use, run the pipeline with `-profile PROFILENAME`. This will download and launch the [`profile.config`](../conf/profile.config) which has been pre-configured with a setup suitable for the PROFILE cluster. Using this profile, Nextflow will download a Singularity image with all of the required software before execution of the pipeline.
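
For instance, a minimal launch might look like the sketch below (the pipeline name and its parameters are placeholders; a fuller example including module loading is shown further down):

```bash
## Minimal sketch: any nf-core pipeline can be launched with this profile.
## nf-core/rnaseq and its parameters here are placeholders only.
nextflow run nf-core/rnaseq -profile PROFILENAME --reads '*_R{1,2}.fastq.gz' --genome GRCh37
```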

## Below is non-mandatory information, e.g. on modules to load

Before running the pipeline you will need to load Nextflow and Singularity using the environment module system on PROFILE CLUSTER. You can do this by issuing the commands below:

```bash
## Load Nextflow and Singularity environment modules
module purge
module load Nextflow/0.32.0
module load Singularity/2.6.0

## Example command for nf-core/atacseq
nextflow run nf-core/atacseq -profile PROFILE --genome GRCh37 --design /path/to/design.csv --email test.user@crick.ac.uk
```

## Below is non-mandatory information on iGenomes-specific configuration

A local copy of the iGenomes resource has been made available on PROFILE CLUSTER so you should be able to run the pipeline against any reference available in the `igenomes.config` specific to the nf-core pipeline. You can do this by simply using the `--genome <GENOME_ID>` parameter. Some of the more exotic genomes may not have been downloaded onto PROFILE CLUSTER so have a look in the `igenomes_base` path specified in [`profile.config`](../conf/profile.config), and if your genome of interest isn't present please contact [local_contact_name_for_profile](mailto:local_contact_handle).

Alternatively, if you are running the pipeline regularly for genomes that aren't available in the iGenomes resource, we recommend creating a config file with paths to your reference genome indices (see the [`reference genomes documentation`](https://github.com/nf-core/atacseq/blob/master/docs/configuration/reference_genomes.md) for instructions).
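
As a rough sketch, such a config could be written and passed to the pipeline with `-c`. The genome key, attributes and paths below are placeholders; check the reference genomes documentation linked above for the attributes your pipeline actually expects:

```bash
## Hypothetical sketch: write a minimal custom genome config and pass it with -c.
cat > custom_genomes.config <<'EOF'
params {
  genomes {
    'MYGENOME' {
      fasta = '/path/to/MYGENOME/genome.fa'
      gtf   = '/path/to/MYGENOME/genes.gtf'
    }
  }
}
EOF

## The genome key can then be selected with --genome as usual.
nextflow run nf-core/atacseq -profile PROFILE -c custom_genomes.config --genome 'MYGENOME' --design /path/to/design.csv
```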

All of the intermediate files required to run the pipeline will be stored in the `work/` directory. It is recommended to delete this directory after the pipeline has finished successfully because it can get quite large, and all of the main output files will be saved in the `results/` directory anyway.
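
For example, a minimal clean-up sketch (run from the directory the pipeline was launched in, since `nextflow clean` acts on the run history recorded there):

```bash
## Remove intermediate files once the run has finished successfully:
## either delete the work directory directly...
rm -rf work/
## ...or let Nextflow clean up the work directories of the last run.
nextflow clean -f
```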

> NB: You will need an account to use the HPC cluster on PROFILE CLUSTER in order to run the pipeline. If in doubt contact IT.

> NB: Nextflow will need to submit the jobs via the job scheduler to the HPC cluster and as such the commands above will have to be executed on one of the login nodes. If in doubt contact IT.