
nf-core/modules

DSL2 IS AN EXPERIMENTAL FEATURE UNDER DEVELOPMENT. SYNTAX, ORGANISATION AND LAYOUT OF THIS REPOSITORY MAY CHANGE IN THE NEAR FUTURE!

A repository for hosting nextflow DSL2 module files containing tool-specific process definitions and their associated documentation.


Terminology

The features offered by Nextflow DSL 2 can be used in various ways depending on the granularity with which you would like to write pipelines. Please see the listing below for the hierarchy and associated terminology we have decided to use when referring to DSL 2 components:

  • Module: A process that can be used within different pipelines and is as atomic as possible, i.e. it cannot be split into another module. An example of this would be a module file containing the process definition for a single tool such as FastQC. This repository has been created to host only atomic module files, which should be added to the software sub-directory along with the required documentation, software and tests.
  • Sub-workflow: A chain of multiple modules that offer a higher-level of functionality within the context of a pipeline. For example, a sub-workflow to run multiple QC tools with FastQ files as input. Sub-workflows should be shipped with the pipeline implementation and if required they should be shared amongst different pipelines directly from there. As it stands, this repository will not host sub-workflows.
  • Workflow: What DSL 1 users would consider an end-to-end pipeline. For example, from one or more inputs to a series of outputs. This can either be implemented using a large monolithic script, as with DSL 1, or by using a combination of individual DSL 2 modules and sub-workflows.

Using existing modules

The Nextflow include statement can be used within your pipelines in order to load module files that you have available locally.

You should be able to get a good idea as to how other people are using module files by looking at pipelines available in nf-core e.g. nf-core/chipseq (work in progress)
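As an illustrative sketch only (the module path, process name and channel name below are hypothetical and depend on how your local copy is organised), loading a module with the include statement might look like:

```nextflow
// main.nf -- sketch of loading a locally available module file
nextflow.preview.dsl = 2

include { FASTQC } from './modules/software/fastqc/main.nf'

workflow {
    // reads_ch is assumed to be a channel of [name, single_end, reads] triplets
    FASTQC(reads_ch)
}
```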

Configuration and parameters

The module files hosted in this repository define a set of processes for software tools such as fastqc, trimgalore, bwa etc. This allows you to share and add common functionality across multiple pipelines in a modular fashion.

The definition and standards for module files are still under discussion amongst the community, but a description should hopefully be added here soon!

Offline usage

If you want to use an existing module file available in nf-core/modules and you're running on a system that has no internet connection, you'll need to download the repository (e.g. git clone https://github.com/nf-core/modules.git) and place it in a location that is visible to the file system on which you are running the pipeline. Then create a custom config file, called e.g. custom_module.conf, containing the following information:

include /path/to/downloaded/modules/directory/

Then you can run the pipeline by directly passing the additional config file with the -c parameter:

nextflow run /path/to/pipeline/ -c /path/to/custom_module.conf

Note that the nf-core/tools helper package has a download command that fetches all required pipeline files, singularity containers, institutional configs and modules in one go, to make this process easier.

Adding a new module file

If you decide to upload your module file to nf-core/modules then this will ensure that it will be automatically downloaded, and available at run-time to all nf-core pipelines, and to everyone within the Nextflow community! See nf-core/modules/software for examples.

The definition and standards for module files are still under discussion amongst the community. Currently the following points have been agreed on:

The key words "MUST", "MUST NOT", "SHOULD", etc. are to be interpreted as described in RFC 2119.

Defining inputs, outputs and parameters

  • A module file SHOULD only define inputs and outputs as parameters. Additionally,
    • it MUST define threads or resources where required for a particular process using task.cpus
    • it MUST be possible to pass additional parameters to the tool as a command line string via the params.<MODULE>_args parameter.
    • All NGS modules MUST accept a triplet [name, single_end, reads] as input. The single-end boolean value MUST be specified through the input channel and not inferred from the data.
  • Process names MUST be all uppercase.
  • Each process MUST emit a file <TOOL>.version.txt containing a single line with the software's version in the format v<VERSION_NUMBER>.
  • All outputs MUST be named using emit.
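A hypothetical module file following these conventions might look like the sketch below (the tool, file names and params.fastqc_args parameter are illustrative, standing in for params.<MODULE>_args):

```nextflow
// software/fastqc/main.nf -- illustrative sketch only, not the official module
nextflow.preview.dsl = 2

process FASTQC {                                      // process names MUST be uppercase
    input:
    tuple val(name), val(single_end), path(reads)     // NGS triplet [name, single_end, reads]

    output:
    path "*.html", emit: html                         // all outputs named using emit
    path "fastqc.version.txt", emit: version          // <TOOL>.version.txt

    script:
    """
    fastqc --threads $task.cpus ${params.fastqc_args ?: ''} $reads
    fastqc --version | sed 's/FastQC //' > fastqc.version.txt
    """
}
```

Note how threads are taken from task.cpus rather than hard-coded, and extra command-line arguments are passed through a single params.<MODULE>_args string.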

Atomicity

  • Software that can be piped together SHOULD be added to separate module files, unless there is a run-time or storage advantage to combining them, e.g. bwa mem | samtools view -C -T ref.fasta to output CRAM instead of SAM.
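As an illustrative sketch of that exception (reference and output names are hypothetical), both tools share one script block so that no intermediate SAM file is ever written to disk:

```nextflow
script:
"""
# Piping bwa into samtools avoids a large intermediate SAM file
bwa mem -t $task.cpus ref.fasta $reads \\
    | samtools view -C -T ref.fasta -o ${name}.cram -
"""
```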

Publishing results

  • The module MUST accept the parameters params.out_dir and params.publish_dir and MUST publish results into ${params.out_dir}/${params.publish_dir}.

  • The publishDir mode MUST be configurable via params.publish_dir_mode.

  • The module MUST accept a parameter params.publish_results accepting at least

    • "none", to publish no files at all, and
    • "default", to publish a sensible selection of files.

    It MAY accept further options.

  • To ensure consistent naming, files SHOULD be renamed according to the $name variable before returning them.
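Put together, the publishing rules above might be sketched as follows (the process, file names and the behaviour chosen for "default" are illustrative assumptions):

```nextflow
// Illustrative only: a hypothetical process wired up to the publishing parameters
process EXAMPLE_TOOL {
    publishDir "${params.out_dir}/${params.publish_dir}",
        mode: params.publish_dir_mode,
        saveAs: { filename ->
            // 'none' publishes nothing; 'default' publishes everything this
            // sketch produces (a real module would pick a sensible selection)
            params.publish_results == 'none' ? null : filename
        }

    input:
    tuple val(name), val(single_end), path(reads)

    output:
    path "${name}.log", emit: log      // renamed via $name for consistent naming

    script:
    """
    example_tool $reads > ${name}.log
    """
}
```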

Testing

  • Every module MUST be tested by adding a test workflow with a toy dataset.
  • Test data MUST be stored within this repo. It is RECOMMENDED to re-use generic files from tests/data by symlinking them into the module's test directory. Module-specific files MUST be added to the test directory directly. Test files MUST be kept as tiny as possible.
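A minimal test workflow along these lines might look like the following sketch (all paths and the sample name are illustrative):

```nextflow
// tests/software/fastqc/main.nf -- illustrative test sketch
nextflow.preview.dsl = 2

include { FASTQC } from '../../../software/fastqc/main.nf'

workflow test_fastqc {
    // tiny toy FASTQ, symlinked into this directory from tests/data
    FASTQC(Channel.of(['test_sample', true, file('input/test_R1.fastq.gz')]))
}
```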

Software requirements

  • Software requirements SHOULD be declared in a conda environment.yml file, including exact version numbers. Additionally, there MUST be a Dockerfile that containerizes the environment, or packages the software if conda is not available.
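A minimal environment.yml along these lines (the tool, channels and pinned version are illustrative):

```yaml
# environment.yml -- exact versions pinned
name: nf-core-fastqc
channels:
  - conda-forge
  - bioconda
dependencies:
  - bioconda::fastqc=0.11.9
```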

File formats

  • Wherever possible, CRAM files SHOULD be used over BAM files.
  • Wherever possible, FASTQ files SHOULD be compressed using gzip.

Documentation

Please add some documentation to the top of the module file in the form of native Nextflow comments. This has to follow a particular format, as you can see from other examples in the nf-core/modules/nf directory.
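For illustration only (the exact format is defined by the existing examples in the repository, not by this sketch), such a comment header might look like:

```nextflow
/*
 * FastQC -- quality control checks on raw sequencing reads
 *
 * Input:  tuple [name, single_end, reads]
 * Output: HTML/zip reports, plus fastqc.version.txt
 * Params: params.fastqc_args -- extra command-line arguments for the tool
 */
```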

Uploading to nf-core/modules

Fork the nf-core/modules repository to your own GitHub account. Within the local clone of your fork, add the module file to the nf-core/modules/software directory. Please keep the naming consistent between the module and documentation files, e.g. bwa.nf and bwa.md, respectively.

Commit and push these changes to your fork on GitHub, and then create a pull request on the nf-core/modules GitHub repo with the appropriate information.

We will be notified automatically when you create your pull request, and provided that everything adheres to the nf-core guidelines, we will endeavour to approve it as soon as possible.

Help

If you have any questions or issues please send us a message on Slack.