We have a Slack channel dedicated to UPPMAX users on the nf-core Slack: [https://nfcore.slack.com/channels/uppmax](https://nfcore.slack.com/channels/uppmax)

## Using the UPPMAX config profile
Before running the pipeline you will need to either install Nextflow or load it using the environment module system.
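If you go the module route, the load step presumably looks like the following (the module names are an assumption based on the standard UPPMAX module system, where `bioinfo-tools` must be loaded first; check `module avail` for what is actually installed):

```bash
# Load the module system's Nextflow (bioinfo-tools is a prerequisite on UPPMAX)
$ module load bioinfo-tools Nextflow
```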
To use, run the pipeline with `-profile uppmax` (one hyphen).
This will download and launch the [`uppmax.config`](../conf/uppmax.config), which has been pre-configured with a setup suitable for the UPPMAX servers.
It will enable Nextflow to manage the pipeline jobs via the Slurm job scheduler.
Using this profile, Docker images containing the required software will be downloaded and converted to Singularity images before execution of the pipeline.
In addition to this config profile, you will also need to specify an UPPMAX project ID.
You can do this with the `--project` flag (two hyphens) when launching Nextflow.
For example:
```bash
# Launch a nf-core pipeline with the uppmax profile for the project id snic2018-1-234
$ nextflow run nf-core/<PIPELINE> -profile uppmax --project snic2018-1-234 [...]
```
> NB: If you're not sure what your UPPMAX project ID is, try running `groups` or checking SUPR.

Just run Nextflow on a login node and it will handle everything else.

Remember to use `-bg` to launch Nextflow in the background, so that the pipeline doesn't exit if you leave your terminal session.
Alternatively, you can also launch Nextflow in a `screen` or a `tmux` session.
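As a sketch, a detached launch looks like this (the pipeline name and project ID are placeholders):

```bash
# -bg detaches the run from the terminal; progress is written to .nextflow.log
$ nextflow run nf-core/<PIPELINE> -profile uppmax --project <PROJECT> -bg
```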
## Using iGenomes references

A local copy of the iGenomes resource has been made available on all UPPMAX clusters, so you should be able to run the pipeline against any reference available in the `conf/igenomes.config`.
You can do this by simply using the `--genome <GENOME_ID>` parameter.
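For example (the genome ID here is illustrative; any key defined in `igenomes.config` works):

```bash
# Use the local iGenomes copy of the GRCh37 reference
$ nextflow run nf-core/<PIPELINE> -profile uppmax --project <PROJECT> --genome GRCh37
```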
## Getting more memory

If your nf-core pipeline run is running out of memory, you can run on a fat node.
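A minimal sketch of such a run, assuming the pipeline exposes a `--clusterOptions` parameter for passing raw Slurm flags (the exact values below are an assumption):

```bash
# Ask Slurm for a 256 GB node for every job (hypothetical flag values)
$ nextflow run nf-core/<PIPELINE> -profile uppmax --project <PROJECT> --clusterOptions "-C mem256GB -p node"
```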
This raises the ceiling of available memory from the default of `128.GB` to `256.GB`.
`rackham` has nodes with 128GB, 256GB and 1TB memory available.

Note that each job will still start with the same request as normal, but restarted attempts with larger requests will be able to request greater amounts of memory.

All jobs will be submitted to fat nodes using this method, so it's only for use in extreme circumstances.

The UPPMAX nf-core configuration profile uses the `hostname` of the active environment to automatically apply the following resource limits:
* `bianca`
  * cpus available: 16 cpus
  * memory available: 109 GB
* `irma`
  * cpus available: 16 cpus
  * memory available: 250 GB
* `rackham`
  * cpus available: 20 cpus
  * memory available: 125 GB
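To see which set applies to your current session, you can print the hostname yourself; the prefix mapping in the comment below is an assumption about how the profile matches hostnames:

```bash
# The profile keys its resource caps off this value;
# b* hosts get the bianca limits, i* irma, r* rackham (assumed prefix matching)
$ hostname
```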
## Development config

The `devel` config submits jobs to the UPPMAX `devel` partition, which is intended for short development and test runs.
It is not suitable for use with real data.

To use it, submit with `-profile uppmax,devel`.
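For example (the pipeline name and project ID are placeholders):

```bash
# Submit a quick test run to the devel partition
$ nextflow run nf-core/<PIPELINE> -profile uppmax,devel --project <PROJECT>
```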
## Running on bianca

> :warning: For more information about `bianca`, follow the [UPPMAX `bianca` user guide](http://uppmax.uu.se/support/user-guides/bianca-user-guide/).
> :warning: For more information, follow the [nf-core guide for running offline](https://nf-co.re/usage/offline) and the [nf-core `tools` guide for downloading pipelines for offline use](https://nf-co.re/tools#downloading-pipelines-for-offline-use).
> :warning: For more information about using `Singularity` with UPPMAX, follow the [UPPMAX `Singularity` guide](https://www.uppmax.uu.se/support-sv/user-guides/singularity-user-guide/).

For security reasons, there is no internet access on `bianca`, so you can't download from or upload files to the cluster directly.
Before running an nf-core pipeline on `bianca` you will first have to download the pipeline and the Singularity images needed elsewhere, and transfer them via the `wharf` area to your own `bianca` project.

In this guide, we use `rackham` to download and transfer files to the `wharf` area, but it can also be done on your own computer.
If you use `rackham` to download the pipeline and the Singularity containers, we recommend using an interactive session (cf [interactive guide](https://www.uppmax.uu.se/support/faq/running-jobs-faq/how-can-i-run-interactively-on-a-compute-node/)), which is what we do in the following guide.
### Download and install Nextflow

You can use the UPPMAX-provided Nextflow `module`, but if necessary, you can also download a more recent version.
```bash
# Connect to rackham
$ ssh -X <USER>@rackham.uppmax.uu.se
# Or stay in your terminal

# Download the nextflow-all bundle
$ wget https://github.com/nextflow-io/nextflow/releases/download/v<NEXTFLOW_VERSION>/nextflow-<NEXTFLOW_VERSION>-all

# Connect to the wharf area using sftp
$ sftp <USER>-<BIANCA_PROJECT>@bianca-sftp.uppmax.uu.se:<USER>-<BIANCA_PROJECT>

# Transfer nextflow to the wharf area
sftp> put nextflow-<NEXTFLOW_VERSION>-all .

# Exit sftp
$ exit

# Connect to bianca
$ ssh -A <USER>-<BIANCA_PROJECT>@bianca.uppmax.uu.se

# Go to your project
$ cd /castor/project/proj_nobackup

# Make a folder for Nextflow
$ mkdir tools
$ mkdir tools/nextflow

# Move Nextflow from the wharf area to its directory
$ mv /castor/project/proj_nobackup/wharf/<USER>/<USER>-<BIANCA_PROJECT>/nextflow-<NEXTFLOW_VERSION>-all /castor/project/proj_nobackup/tools/nextflow

# Establish permission
$ chmod a+x /castor/project/proj_nobackup/tools/nextflow/nextflow-<NEXTFLOW_VERSION>-all

# If you want other people to use it
# Be sure that your group has rights to the directory as well
$ chown -R .<BIANCA_PROJECT> /castor/project/proj_nobackup/tools/nextflow/nextflow-<NEXTFLOW_VERSION>-all

# Make a link to it
$ ln -s /castor/project/proj_nobackup/tools/nextflow/nextflow-<NEXTFLOW_VERSION>-all /castor/project/proj_nobackup/tools/nextflow/nextflow

# And every time you're launching Nextflow, don't forget to export the following ENV variables
# Or add them to your .bashrc file
$ export NXF_HOME=/castor/project/proj_nobackup/tools/nextflow/
$ export PATH=${NXF_HOME}:${PATH}
$ export NXF_TEMP=$SNIC_TMP
$ export NXF_LAUNCHER=$SNIC_TMP
$ export NXF_SINGULARITY_CACHEDIR=/castor/project/proj_nobackup/singularity-images
```
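Once the variables are exported, a quick smoke test confirms the linked launcher works (a suggestion, not part of the original instructions):

```bash
# Print the Nextflow version to confirm the link and PATH are correct
$ nextflow -version
```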
### Install nf-core tools
```bash
# Connect to rackham
$ ssh -X <USER>@rackham.uppmax.uu.se
# Or stay in your terminal

# Install the latest pip version
$ pip3 install --upgrade --force-reinstall git+https://github.com/nf-core/tools.git@dev --user
```
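You can then check that the helper is available (assuming `~/.local/bin`, where `pip3 --user` installs scripts, is on your `PATH`):

```bash
# Confirm the nf-core tools installation
$ nf-core --version
```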
### Download and transfer an nf-core pipeline
```bash
# Connect to rackham
$ ssh -X <USER>@rackham.uppmax.uu.se
# Or stay in your terminal

# Open an interactive session (if you are on rackham)
$ interactive <RACKHAM_PROJECT>

# Download a pipeline (nf-core/rnaseq 3.0) with the singularity images
$ nf-core download rnaseq -r 3.0 -s --compress none

# If necessary, extra singularity images can be downloaded separately
# For example, if you downloaded nf-core/sarek, you will need extra images for annotation
# Here we download the nf-core/sarek GRCh38 specific images
$ singularity pull --name nfcore-sareksnpeff-2.7.GRCh38.img docker://nfcore/sareksnpeff:2.7.GRCh38
$ singularity pull --name nfcore-sarekvep-2.7.GRCh38.img docker://nfcore/sarekvep:2.7.GRCh38

# Which can then be moved into the nf-core/sarek download folder
$ mv *.img nf-core-sarek-2.7/singularity-images/.

# Connect to the wharf area using sftp
$ sftp <USER>-<BIANCA_PROJECT>@bianca-sftp.uppmax.uu.se:<USER>-<BIANCA_PROJECT>

# Transfer the rnaseq folder from rackham to the wharf area
sftp> put -r nf-core-rnaseq-3.0 .
# The archives will be in the wharf folder in your user home on your bianca project

# Exit sftp
$ exit

# Connect to bianca
$ ssh -A <USER>-<BIANCA_PROJECT>@bianca.uppmax.uu.se

# Go to your project
$ cd /castor/project/proj_nobackup

# Make and go into an nf-core directory (where you will store all nf-core pipelines)
$ mkdir nf-core
$ cd nf-core

# Copy the folder from the wharf area to the project
$ cp -r /castor/project/proj_nobackup/wharf/<USER>/<USER>-<BIANCA_PROJECT>/nf-core-rnaseq-3.0 .

# If you want other people to use it,
# Be sure that your group has rights to the directory as well
$ chown -R .<BIANCA_PROJECT> nf-core-rnaseq-3.0

# Make a symbolic link to the extracted repository
$ ln -s nf-core-rnaseq-3.0 nf-core-rnaseq-default
```
The principle is to have every member of your project able to use the same nf-core/rnaseq version at the same time.
So every member of the project who wants to use nf-core/rnaseq will need to do:
```bash
# Connect to bianca
$ ssh -A <USER>-<BIANCA_PROJECT>@bianca.uppmax.uu.se

# Go to your user directory
$ cd /home/<USER>

# Make a symbolic link to the default nf-core/rnaseq
$ ln -s /castor/project/proj_nobackup/nf-core/nf-core-rnaseq-default nf-core-rnaseq
```
And then nf-core/rnaseq can be used with:

```bash
# Run rnaseq on bianca
$ nextflow run ~/nf-core-rnaseq -profile uppmax --project <BIANCA_PROJECT> --genome <GENOME_ASSEMBLY> [...]
```
## Update a pipeline

To update, repeat the same steps as for installing and update the link.
```bash
# Connect to bianca (Connect to rackham first if needed)
$ ssh -A <USER>-<BIANCA_PROJECT>@bianca.uppmax.uu.se

# Go to the nf-core directory in your project
$ cd /castor/project/proj_nobackup/nf-core

# Remove the link
$ unlink nf-core-rnaseq-default

# Link to the new nf-core/rnaseq version
$ ln -s nf-core-rnaseq-3.0 nf-core-rnaseq-default
```
You can, for example, keep a `nf-core-rnaseq-default` version that you are sure is working, and make a link for a `nf-core-rnaseq-testing` or `nf-core-rnaseq-development` version.
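A sketch of such a layout (the `3.1` directory name is hypothetical, standing in for whichever newer release you are testing):

```bash
# One stable link and one testing link, pointing at different downloaded releases
$ ln -s nf-core-rnaseq-3.0 nf-core-rnaseq-default
$ ln -s nf-core-rnaseq-3.1 nf-core-rnaseq-testing
```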