
Run prettier

This commit is contained in:
ameynert 2022-09-01 09:28:05 +01:00 committed by GitHub
parent 8e275182ea
commit 28e59e38b4


@ -1,26 +1,36 @@
# nf-core/configs: CRA HPC Configuration

The nf-core pipelines sarek and rnaseq have been tested on the CRA HPC.

## Before running the pipeline

- You will need an account on the CRA HPC cluster to run the pipeline.
- Make sure that Singularity and Nextflow are installed.
- Download pipeline Singularity images to the HPC system using [nf-core tools](https://nf-co.re/tools/#downloading-pipelines-for-offline-use)
```
$ conda install nf-core
$ nf-core download
```
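As a concrete illustration, a non-interactive download of a specific pipeline release together with its Singularity images might look like the command below. The pipeline name, release, and output path are placeholders, and the exact flags vary between nf-core/tools versions, so check `nf-core download --help` first.

```
$ nf-core download sarek -r 3.0.1 --container singularity --compress none \
    --outdir /lustre/fs0/storage/yourCRAAccount/pipelines/sarek
```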
- You will need to specify a Singularity cache directory in your `~/.bashrc`. Container images will be stored in this cache directory so they are not downloaded again every time you run a pipeline. Since space in your home directory is limited, using the Lustre file system is recommended.
```
export NXF_SINGULARITY_CACHEDIR="/lustre/fs0/storage/yourCRAAccount/cache_dir"
```
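For example, you can create the cache directory and confirm the variable is picked up by new shells (the path mirrors the placeholder above):

```
$ mkdir -p /lustre/fs0/storage/yourCRAAccount/cache_dir
$ source ~/.bashrc
$ echo $NXF_SINGULARITY_CACHEDIR
```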
- Download the iGenomes reference to be used as a local copy.
```
$ aws s3 --no-sign-request --region eu-west-1 sync s3://ngi-igenomes/igenomes/Homo_sapiens/GATK/GRCh38/ /lustre/fs0/storage/yourCRAAccount/references/Homo_sapiens/GATK/GRCh38/
```
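If the adcra profile does not already point at this location, the local copy can usually be passed to an nf-core pipeline via the standard `--igenomes_base` parameter. The command below is only a sketch using the placeholder paths from this guide; confirm the parameter in your pipeline's documentation.

```
$ nextflow run nf-core/sarek -profile adcra \
    --igenomes_base /lustre/fs0/storage/yourCRAAccount/references \
    --genome GRCh38
```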

## Running the pipeline using the adcra config profile

- Run the pipeline within a [screen](https://linuxize.com/post/how-to-use-linux-screen/) or [tmux](https://linuxize.com/post/getting-started-with-tmux/) session.
- Specify the config profile with `-profile adcra`.
- Using the Lustre file system to store results (`--outdir`) and intermediate files (`-work-dir`) is recommended.
```
nextflow run /path/to/nf-core/<pipeline-name> -profile adcra \
--genome GRCh38 \