
Merge pull request #44 from drpatelh/dev

Add proper nf-core logo
Commit 515e9484de by Alexander Peltzer, 2019-06-01 12:36:57 +02:00, committed by GitHub.
5 changed files with 6 additions and 20 deletions

@@ -1,6 +1,4 @@
-<img src="docs/images/nf-core-logo.png" width="400">
-# [nf-core/configs](https://github.com/nf-core/configs)
+# ![nf-core/configs](docs/images/nfcore-configs_logo.png)
 [![Build Status](https://travis-ci.org/nf-core/configs.svg?branch=master)](https://travis-ci.org/nf-core/configs)

@@ -10,7 +10,7 @@ manifest {
 }
 process {
-  beforeScript = {'module load Singularity; module load Miniconda3'}
+  beforeScript = 'module load Miniconda3/4.6.7'
   executor = 'pbspro'
   clusterOptions = { "-P $params.project" }
 }
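
A profile wired like this is picked up at launch time via `-profile`; the `clusterOptions` line forwards a project identifier to PBS Pro. A minimal launch sketch, in which the pipeline name, profile name, and project ID are illustrative placeholders, not taken from this PR:

```bash
## Launch an nf-core pipeline against a config like the one above.
## "--project myproject" is forwarded to PBS Pro as "-P myproject"
## via the clusterOptions setting; all names here are placeholders.
nextflow run nf-core/methylseq -profile <cluster_profile> --project myproject
```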

@@ -10,7 +10,7 @@ Before running the pipeline you will need to load Nextflow and Singularity using
 ## Load Nextflow and Singularity environment modules
 module purge
 module load devel/java_jdk/1.8.0u121
-module load qbic/singularity_slurm/3.0.1
+module load qbic/singularity_slurm/3.0.3
 ```
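
Taken together with the bumped Singularity module above, a full session on this cluster would load the modules and then launch a pipeline. A sketch, assuming a hypothetical pipeline and institutional profile name (neither is confirmed by this hunk):

```bash
## Load Nextflow and Singularity environment modules, then launch.
## Pipeline and profile names below are illustrative assumptions.
module purge
module load devel/java_jdk/1.8.0u121
module load qbic/singularity_slurm/3.0.3
nextflow run nf-core/methylseq -profile cfc
```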

Binary file not shown (new logo image, 14 KiB).

@@ -2,27 +2,15 @@
 All nf-core pipelines have been successfully configured for use on the MENDEL CLUSTER at the Gregor Mendel Institute (GMI).
-To use, run the pipeline with `-profile conda,mendel`. This will download and launch the [`mendel.config`](../conf/mendel.config) which has been pre-configured with a setup suitable for the MENDEL cluster. A Conda environment will be created automatically and software dependencies will be downloaded from ['bioconda'](https://bioconda.github.io/).
-Theoretically, using `-profile singularity,mendel` would download a docker image containing all of the required software, and convert it to a Singularity image before execution of the pipeline. However, there is a regression in the Singularity deployment on MENDEL which renders containers downloaded from public repositories unusable because they lack the /lustre mountpoint.
-If you want to run the pipeline containerized anyway you will have to build the image yourself (on a machine where you have root access) using the provided `Singularity` file in the pipeline repository:
-```bash
-cd /path/to/pipeline-repository
-echo 'mkdir /lustre > Singularity'
-singularity build nf-core-methylseq-custom.simg Singularity
-```
-After you copied the container image to the cluster filesystem, make sure to pass the path to the image to the pipeline with `-with-singularity /path/to/nf-core-methylseq-custom.simg`
-Before running the pipeline you will need to load Nextflow and Conda using the environment module system on MENDEL. You can do this by issuing the commands below:
+To use, run the pipeline with `-profile conda,mendel`. This will download and launch the [`mendel.config`](../conf/mendel.config) which has been pre-configured with a setup suitable for the MENDEL cluster. A Conda environment will be created automatically and software dependencies will be resolved via [bioconda](https://bioconda.github.io/).
+Before running the pipeline you will need to load Conda using the environment module system on MENDEL. You can do this by issuing the commands below:
 ```bash
 ## Load Nextflow and Conda environment modules
 module purge
 module load Nextflow
-module load Miniconda3 # not needed if using Singularity
+module load Miniconda/4.6.7
 ```
 >NB: You will need an account to use the HPC cluster in order to run the pipeline. If in doubt contact the HPC team.
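
For completeness, the updated MENDEL instructions condense to the following session; a sketch in which the module names and profiles come from the hunk above, while the pipeline name is a hypothetical placeholder:

```bash
## Load Nextflow and Conda environment modules on MENDEL
module purge
module load Nextflow
module load Miniconda/4.6.7
## Run with the conda + mendel profiles; pipeline name is a placeholder
nextflow run nf-core/methylseq -profile conda,mendel
```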