# nf-core/configs: Eddie Configuration
nf-core pipelines sarek, rnaseq, and atacseq have all been tested on the University of Edinburgh Eddie HPC.
## Getting help
There is a Slack channel dedicated to eddie users on the MRC IGMM Slack: https://igmm.slack.com/channels/eddie3
## Using the Eddie config profile
To use, run the pipeline with `-profile eddie` (one hyphen).
This will download and launch the `eddie.config`, which has been pre-configured with a setup suitable for the University of Edinburgh Eddie HPC.
The configuration runs the pipelines' Docker container images via Singularity by default. Conda is not currently supported.

```bash
nextflow run nf-core/PIPELINE -profile eddie # ...rest of pipeline flags
```
Before running the pipeline, you will need to install Nextflow or load it from the module system. Generally the most recent version is the one you want. If you want to run a Nextflow pipeline based on DSL2, you will need a version whose name ends in `-edge`.
To list the available versions:

```bash
module avail igmm/apps/nextflow
```

To load the most recent version:

```bash
module load igmm/apps/nextflow
```
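If you need a DSL2-capable `-edge` release, you can load a specific version instead. The version number below is only an illustration and may not match what is installed, so check the `module avail` output first.

```bash
# Hypothetical: load a specific -edge release listed by `module avail igmm/apps/nextflow`
module load igmm/apps/nextflow/21.04.0-edge
```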
This config lets Nextflow manage the pipeline jobs via the SGE job scheduler and use Singularity for software management.
## Singularity set-up
Load Singularity from the module system and set the Singularity cache directory to the NextGenResources path for the pipeline and version you want to run. If this path does not exist, please contact the IGMM Data Manager to have it added. You can add these lines to the file `$HOME/.bashrc`, or you can run these commands before you run an nf-core pipeline.

```bash
module load singularity
export NXF_SINGULARITY_CACHEDIR="/exports/igmm/eddie/NextGenResources/nextflow/singularity/nf-core-rnaseq_v3.0"
```
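If you want this to persist across logins, one option (simply an equivalent of the manual edit described above, not something the config requires) is to append the same two lines to your `$HOME/.bashrc`:

```bash
# Optional: make the module load and cache directory persistent for future sessions.
cat >> "$HOME/.bashrc" <<'EOF'
module load singularity
export NXF_SINGULARITY_CACHEDIR="/exports/igmm/eddie/NextGenResources/nextflow/singularity/nf-core-rnaseq_v3.0"
EOF
```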
Singularity will create a `.singularity` directory in your `$HOME` directory on eddie. Space on `$HOME` is very limited, so it is a good idea to create a directory somewhere else with more room and link the two locations.

```bash
cd $HOME
mkdir /exports/eddie/path/to/my/area/.singularity
ln -s /exports/eddie/path/to/my/area/.singularity .singularity
```
## Running Nextflow
### On a login node
You can use a `qlogin` session to run Nextflow, provided you request more than the default 2 GB of memory. Unfortunately, you cannot submit the initial Nextflow run process as a job, because you cannot `qsub` within a `qsub`.

```bash
qlogin -l h_vmem=8G
```
If your eddie terminal disconnects, your Nextflow job will stop. To prevent this, you can wrap the run in a bash script and launch it with `nohup` (a minimal example script is sketched after the command below).

```bash
nohup ./nextflow_run.sh &
```
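A `nextflow_run.sh` along the following lines should work; the pipeline, input sheet, genome key, and cache path are placeholders to adapt to your own run.

```bash
#!/usr/bin/env bash
# Hypothetical nextflow_run.sh - adjust modules, cache path, pipeline, and flags to your setup.
module load igmm/apps/nextflow
module load singularity
export NXF_SINGULARITY_CACHEDIR="/exports/igmm/eddie/NextGenResources/nextflow/singularity/nf-core-rnaseq_v3.0"

nextflow run nf-core/rnaseq -profile eddie --input samplesheet.csv --genome GRCh38 --outdir results
```

Remember to make the script executable (`chmod +x nextflow_run.sh`) before calling it with `nohup`.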
### On a wild west node
Wild west nodes on eddie can be accessed via `ssh` (node2c15, node2c16, node3g22). To run Nextflow on one of these nodes, do so within a `screen` session so the run survives if your connection drops.
Start a new screen session:

```bash
screen -S <session_name>
```

List existing screen sessions:

```bash
screen -ls
```

Reconnect to an existing screen session:

```bash
screen -r <session_name>
```
## Using iGenomes references
A local copy of the iGenomes resource has been made available on the Eddie HPC, so you should be able to run the pipeline against any reference available in the `igenomes.config`. You can do this simply by using the `--genome <GENOME_ID>` parameter.
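For example, assuming the `GRCm38` key in `igenomes.config` is the reference you need (the pipeline and other flags here are placeholders):

```bash
# Hypothetical example: substitute the pipeline, input sheet, and genome key for your run.
nextflow run nf-core/rnaseq -profile eddie --input samplesheet.csv --genome GRCm38 --outdir results
```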
## Adjusting maximum resources
This config is set up for IGMM standard nodes, which have 32 cores and 384 GB of memory. If you are a non-IGMM user, please see the ECDF specification and adjust the `--clusterOptions` flag appropriately, e.g.

```bash
--clusterOptions "-C mem256GB" --max_memory "256GB"
```
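In a full command these flags sit alongside the profile, for instance (the pipeline name is a placeholder):

```bash
# Hypothetical invocation targeting a 256 GB node class; match the values to your node specification.
nextflow run nf-core/PIPELINE -profile eddie --clusterOptions "-C mem256GB" --max_memory "256GB"
```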