mirror of https://github.com/MillironX/nf-configs.git, synced 2024-11-25 09:19:56 +00:00
Written a help file
This commit is contained in:
parent 0bfedde296
commit a97b887d93
1 changed file with 11 additions and 5 deletions

@@ -1,9 +1,15 @@
-# nf-core/configs: BI Configuration
+# nf-core/configs: Cancer Research UK Manchester Institute Configuration

-All nf-core pipelines have been successfully configured for use at Boehringer Ingelheim.
+All nf-core pipelines have been successfully configured for use on the HPC (phoenix) at the Cancer Research UK Manchester Institute.

-To use, run the pipeline with `-profile bi`. This will download and launch the [`bi.config`](../conf/bi.config), which has been pre-configured with a setup suitable for the BI systems. Using this profile, a Docker image containing all of the required software will be downloaded and converted to a Singularity image before execution of the pipeline.
+To use, run the pipeline with `-profile crukmi`. This will download and launch the [`crukmi.config`](../conf/crukmi.config), which has been pre-configured with a setup suitable for the phoenix HPC. Using this profile, Singularity images will be downloaded to run on the cluster.

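As an editorial illustration (not part of the commit), a launch with this profile might look like the sketch below; the pipeline name, samplesheet, and output directory are placeholders, and only the `-profile crukmi` flag comes from this configuration:

```bash
# Hypothetical launch of an nf-core pipeline on the phoenix HPC.
# Pipeline name and paths are placeholders; the profile flag is what
# this config provides.
nextflow run nf-core/rnaseq \
    -profile crukmi \
    --input samplesheet.csv \
    --outdir results/
```

Because the profile pulls Singularity images on demand, the first run of a given pipeline version can be noticeably slower than subsequent runs.
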
-Before running the pipeline you will need to follow the internal documentation to run Nextflow on our systems. In addition, you will need to set the environment variable `NXF_GLOBAL_CONFIG` to the path of the internal global config, which is not publicly available here.
+Before running the pipeline you will need to load Nextflow using the environment module system, for example via:

+```bash
+## Load Nextflow and Singularity environment modules
+module purge
+module load apps/nextflow/22.04.5
+```
+
+> NB: Nextflow will need to submit the jobs via the job scheduler to the HPC cluster, and as such the commands above will have to be executed on one of the login nodes. If in doubt, contact IT.

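On a login node, a quick sanity check (editorial, not part of the commit) is to confirm the module exists and that the Nextflow binary resolves afterwards; the module name is taken from the snippet above:

```bash
# List matching environment modules, load the Nextflow module, and
# print the version to confirm the binary is on PATH.
module avail apps/nextflow
module load apps/nextflow/22.04.5
nextflow -version
```
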
The pipeline should always be executed inside a workspace on the `/scratch/` system. All of the intermediate files required to run the pipeline will be stored in the `work/` directory. It is recommended to delete this directory after the pipeline has finished successfully, because it can get quite large; all of the main output files will be saved in the `results/` directory.
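
Concretely (again editorial, with placeholder paths), a run inside a scratch workspace and the recommended clean-up might look like:

```bash
# Create and enter a workspace on the scratch filesystem (the project
# path is a placeholder), then launch the pipeline from there.
mkdir -p /scratch/my_project && cd /scratch/my_project
nextflow run nf-core/rnaseq -profile crukmi --input samplesheet.csv --outdir results/

# After a successful run, final outputs live in results/; the work/
# directory holds intermediates and can be deleted to reclaim space.
rm -rf work/
```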