# nf-core/modules

> THIS REPOSITORY IS UNDER ACTIVE DEVELOPMENT. SYNTAX, ORGANISATION AND LAYOUT MAY CHANGE WITHOUT NOTICE! PLEASE BE KIND TO OUR CODE REVIEWERS AND SUBMIT ONE PULL REQUEST PER MODULE :)
A repository for hosting Nextflow DSL2 module files containing tool-specific process definitions and their associated documentation.
## Table of contents

- [Using existing modules](#using-existing-modules)
- [Adding a new module file](#adding-a-new-module-file)
  - [Checklist](#checklist)
  - [`nf-core modules create`](#nf-core-modules-create)
  - [Test data](#test-data)
  - [Running tests manually](#running-tests-manually)
  - [Uploading to nf-core/modules](#uploading-to-nf-coremodules)
  - [Guidelines](#guidelines)
- [Terminology](#terminology)
- [Help](#help)
- [Citation](#citation)
## Using existing modules
The module files hosted in this repository define a set of processes for software tools such as `fastqc`, `bwa`, `samtools` etc. This allows you to share and add common functionality across multiple pipelines in a modular fashion.
We have written a helper command in the `nf-core/tools` package that uses the GitHub API to obtain the relevant information for the module files present in the `modules/` directory of this repository. This includes using `git` commit hashes to track changes for reproducibility purposes, and to download and install all of the relevant module files.
1. Install the latest version of `nf-core/tools` (`>=2.0`)

2. List the available modules:

   ```console
   $ nf-core modules list remote

                                             ,--./,-.
             ___     __   __   __   ___     /,-._.--~\
       |\ | |__  __ /  ` /  \ |__) |__         }  {
       | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                             `._,._,'

       nf-core/tools version 2.0

   INFO     Modules available from nf-core/modules (master):          pipeline_modules.py:164

   ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
   ┃ Module Name                    ┃
   ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
   │ bandage/image                  │
   │ bcftools/consensus             │
   │ bcftools/filter                │
   │ bcftools/isec                  │
   ..truncated..
   ```
3. Install the module in your pipeline directory:

   ```console
   $ nf-core modules install fastqc

                                             ,--./,-.
             ___     __   __   __   ___     /,-._.--~\
       |\ | |__  __ /  ` /  \ |__) |__         }  {
       | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                             `._,._,'

       nf-core/tools version 2.0

   INFO     Installing fastqc                                         pipeline_modules.py:213
   INFO     Downloaded 3 files to ./modules/nf-core/modules/fastqc    pipeline_modules.py:236
   ```
4. Import the module in your Nextflow script:

   ```nextflow
   #!/usr/bin/env nextflow

   nextflow.enable.dsl = 2

   include { FASTQC } from './modules/nf-core/modules/fastqc/main' addParams( options: [:] )
   ```
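   For example, a minimal sketch of calling the included process from a workflow block (the meta map contents and FastQ path are illustrative assumptions, not part of the module):

   ```nextflow
   workflow {
       // [ meta map, reads ] input as expected by the FASTQC module
       input = [ [ id: 'sample1', single_end: true ],              // hypothetical meta map
                 file('sample1.fastq.gz', checkIfExists: true) ]   // hypothetical FastQ file
       FASTQC ( input )
   }
   ```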
5. Remove the module from the pipeline repository if required:

   ```console
   $ nf-core modules remove fastqc

                                             ,--./,-.
             ___     __   __   __   ___     /,-._.--~\
       |\ | |__  __ /  ` /  \ |__) |__         }  {
       | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                             `._,._,'

       nf-core/tools version 2.0

   INFO     Removing fastqc                                           pipeline_modules.py:271
   INFO     Successfully removed fastqc                               pipeline_modules.py:285
   ```
6. Check that a locally installed nf-core module is up-to-date compared to the one hosted in this repo:

   ```console
   $ nf-core modules lint fastqc

                                             ,--./,-.
             ___     __   __   __   ___     /,-._.--~\
       |\ | |__  __ /  ` /  \ |__) |__         }  {
       | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                             `._,._,'

       nf-core/tools version 2.0

   INFO     Linting pipeline: .                                       lint.py:104
   INFO     Linting module: fastqc                                    lint.py:106

   ╭─────────────────────────────────────────────────────────────────╮
   │ [!] 1 Test Warning                                              │
   ╰─────────────────────────────────────────────────────────────────╯
   ╭──────────────┬───────────────────────────────┬──────────────────────────────────╮
   │ Module name  │ Test message                  │ File path                        │
   ├──────────────┼───────────────────────────────┼──────────────────────────────────┤
   │ fastqc       │ Local copy of module outdated │ modules/nf-core/modules/fastqc/  │
   ╰──────────────┴───────────────────────────────┴──────────────────────────────────╯
   ╭──────────────────────╮
   │ LINT RESULTS SUMMARY │
   ├──────────────────────┤
   │ [✔] 15 Tests Passed  │
   │ [!]  1 Test Warning  │
   │ [✗]  0 Test Failed   │
   ╰──────────────────────╯
   ```
We have plans to add other utility commands to help developers install and maintain modules downloaded from this repository, so watch this space e.g. an `nf-core modules update` command to automatically check and update modules installed within the pipeline.
## Adding a new module file
If you decide to upload a module to `nf-core/modules` then it will become available to all nf-core pipelines, and to everyone within the Nextflow community! See the `modules/` directory for examples.
### Checklist
Please check that the module you wish to add isn't already on `nf-core/modules`:

- Use the `nf-core modules list` command
- Check open pull requests
- Search open issues
If the module doesn't exist on `nf-core/modules`:

- Please create a new issue before adding it
- Set an appropriate subject for the issue e.g. `new module: fastqc`
- Add yourself to the `Assignees` so we can track who is working on the module
### `nf-core modules create`
We have implemented a number of commands in the `nf-core/tools` package to make it incredibly easy for you to create and contribute your own modules to nf-core/modules.
1. Install the latest version of `nf-core/tools` (`>=2.0`)

2. Install `Nextflow` (`>=21.04.0`)

3. Install any of `Docker`, `Singularity` or `Conda`

4. Set up git by adding a new remote of the nf-core git repo called `upstream`:

   ```bash
   git remote add upstream https://github.com/nf-core/modules.git
   ```

   Make a new branch for your module and check it out:

   ```bash
   git checkout -b fastqc
   ```
5. Create a module using the nf-core DSL2 module template:

   ```console
   $ nf-core modules create . --tool fastqc --author @joebloggs --label process_low --meta

                                             ,--./,-.
             ___     __   __   __   ___     /,-._.--~\
       |\ | |__  __ /  ` /  \ |__) |__         }  {
       | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                             `._,._,'

       nf-core/tools version 2.0

   INFO     Using Bioconda package: 'bioconda::fastqc=0.11.9'                   create.py:130
   INFO     Using Docker / Singularity container with tag: 'fastqc:0.11.9--0'   create.py:140
   INFO     Created / edited following files:                                   create.py:218
              ./modules/fastqc/functions.nf
              ./modules/fastqc/main.nf
              ./modules/fastqc/meta.yml
              ./tests/modules/fastqc/main.nf
              ./tests/modules/fastqc/test.yml
              ./tests/config/pytest_modules.yml
   ```
   All of the files required to add the module to `nf-core/modules` will be created/edited in the appropriate places. The 4 files you will need to change are:

   1. `./modules/fastqc/main.nf`

      This is the main script containing the `process` definition for the module. You will see an extensive number of `TODO` statements to help guide you to fill in the appropriate sections and to ensure that you adhere to the guidelines we have set for module submissions.

   2. `./modules/fastqc/meta.yml`

      This file will be used to store general information about the module and author details - the majority of which will already be auto-filled. However, you will need to add a brief description of the files defined in the `input` and `output` sections of the main script since these will be unique to each module.

   3. `./tests/modules/fastqc/main.nf`

      Every module MUST have a test workflow. This file will define one or more Nextflow `workflow` definitions that will be used to unit test the output files created by the module. By default, one `workflow` definition will be added, but please feel free to add as many as possible so we can ensure that the module works on different data types / parameters e.g. a separate `workflow` for single-end and paired-end data.

      Minimal test data required for your module may already exist within this repository, in which case you may just have to change a couple of paths in this file - see the [Test data](#test-data) section for more info and guidelines for adding new standardised data if required.

   4. `./tests/modules/fastqc/test.yml`

      This file will contain all of the details required to unit test the main script in the point above using pytest-workflow. If possible, any outputs produced by the test workflow(s) MUST be included and listed in this file along with an appropriate check e.g. md5sum. The different test options are listed in the pytest-workflow docs.

      As highlighted in the next point, we have added a command to make it much easier to test the workflow(s) defined for the module and to automatically create the `test.yml` with the md5sum hashes for all of the outputs generated by the module. `md5sum` checks are the preferable choice of test to determine file changes; however, this may not be possible for all outputs generated by some tools e.g. if they include time stamps or command-related headers. Please do your best to avoid just checking that the file is present e.g. it may still be possible to check that the file contains the appropriate text snippets.
6. Create a yaml file containing information required for module unit testing:

   ```console
   $ nf-core modules create-test-yml

                                             ,--./,-.
             ___     __   __   __   ___     /,-._.--~\
       |\ | |__  __ /  ` /  \ |__) |__         }  {
       | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                             `._,._,'

       nf-core/tools version 2.0

   INFO     Press enter to use default values (shown in brackets) or type your own responses    test_yml_builder.py:51
   ? Tool name: fastqc
   Test YAML output path (- for stdout) (tests/modules/fastqc/test.yml):
   INFO     Looking for test workflow entry points: 'tests/modules/fastqc/main.nf'              test_yml_builder.py:116
   INFO     Building test meta for entry point 'test_fastqc_single_end'                         test_yml_builder.py:150
   Test name (fastqc test_fastqc_single_end):
   Test command (nextflow run tests/modules/fastqc -entry test_fastqc_single_end -c tests/config/nextflow.config):
   Test tags (comma separated) (fastqc,fastqc_single_end):
   Test output folder with results (leave blank to run test):
   ? Choose software profile Singularity
   INFO     Setting env var '$PROFILE' to 'singularity'                                         test_yml_builder.py:258
   INFO     Running 'fastqc' test with command:                                                 test_yml_builder.py:263
            nextflow run tests/modules/fastqc -entry test_fastqc_single_end -c tests/config/nextflow.config --outdir /tmp/tmpgbneftf5
   INFO     Test workflow finished!                                                             test_yml_builder.py:276
   INFO     Writing to 'tests/modules/fastqc/test.yml'                                          test_yml_builder.py:293
   ```

   > NB: See the [Running tests manually](#running-tests-manually) section below if you would like to run the tests manually.
7. Lint the module locally to check that it adheres to nf-core guidelines before submission:

   ```console
   $ nf-core modules lint . --tool fastqc

                                             ,--./,-.
             ___     __   __   __   ___     /,-._.--~\
       |\ | |__  __ /  ` /  \ |__) |__         }  {
       | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                             `._,._,'

       nf-core/tools version 2.0

   INFO     Linting modules repo: .                                   lint.py:102
   INFO     Linting module: fastqc                                    lint.py:106

   ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
   │ [!] 3 Test Warnings                                                                         │
   ╰─────────────────────────────────────────────────────────────────────────────────────────────╯
   ╭──────────────┬──────────────────────────────────────────────────────────────┬─────────────────────────────────╮
   │ Module name  │ Test message                                                 │ File path                       │
   ├──────────────┼──────────────────────────────────────────────────────────────┼─────────────────────────────────┤
   │ fastqc       │ TODO string in meta.yml: #Add a description of the module... │ modules/nf-core/modules/fastqc/ │
   │ fastqc       │ TODO string in meta.yml: #Add a description and other det... │ modules/nf-core/modules/fastqc/ │
   │ fastqc       │ TODO string in meta.yml: #Add a description of all of the... │ modules/nf-core/modules/fastqc/ │
   ╰──────────────┴──────────────────────────────────────────────────────────────┴─────────────────────────────────╯
   ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
   │ [!] 1 Test Failed                                                                           │
   ╰─────────────────────────────────────────────────────────────────────────────────────────────╯
   ╭──────────────┬──────────────────────────────────────────────────────────────┬─────────────────────────────────╮
   │ Module name  │ Test message                                                 │ File path                       │
   ├──────────────┼──────────────────────────────────────────────────────────────┼─────────────────────────────────┤
   │ fastqc       │ 'meta' map not emitted in output channel(s)                  │ modules/nf-core/modules/fastqc/ │
   ╰──────────────┴──────────────────────────────────────────────────────────────┴─────────────────────────────────╯
   ╭──────────────────────╮
   │ LINT RESULTS SUMMARY │
   ├──────────────────────┤
   │ [✔] 38 Tests Passed  │
   │ [!]  3 Test Warning  │
   │ [✗]  1 Test Failed   │
   ╰──────────────────────╯
   ```
8. Once ready, the code can be pushed and a pull request (PR) created

   On a regular basis you can pull upstream changes into this branch, and it is recommended to do so before pushing and creating a pull request - see below. Rather than merging changes directly from upstream, the rebase strategy is recommended so that your changes are applied on top of the latest master branch from the nf-core repo. This can be performed as follows:

   ```bash
   git pull --rebase upstream master
   ```

   Once you are ready you can push the code and create a PR:

   ```bash
   git push -u origin fastqc
   ```

   Once the PR has been accepted you should delete the branch and checkout master again:

   ```bash
   git checkout master
   git branch -d fastqc
   ```

   In case there are commits on the local branch that didn't make it into the PR (usually commits made after the PR), git will warn about this and not delete the branch. If you are sure you want to delete it, use the following command:

   ```bash
   git branch -D fastqc
   ```
### Test data
In order to test that each module added to `nf-core/modules` is actually working, and to be able to track any changes to results files between module updates, we have set up a number of GitHub Actions CI tests that run each module on a minimal test dataset using Docker, Singularity and Conda.
- All test data for `nf-core/modules` MUST be added to the `modules` branch of `nf-core/test-datasets` and organised by filename extension.

- In order to keep the size of this repository as minimal as possible, pre-existing files from `nf-core/test-datasets` MUST be reused if at all possible.

- Test files MUST be kept as tiny as possible.

- If the appropriate test data doesn't exist in the `modules` branch of `nf-core/test-datasets`, please contact us on the nf-core Slack `#modules` channel (you can join with this invite) to discuss possible options.
### Running tests manually
As outlined in the [`nf-core modules create`](#nf-core-modules-create) section, we have made it quite trivial to create an initial yaml file (via the `nf-core modules create-test-yml` command) containing a listing of all of the module output files and their associated md5sums. However, md5sum checks may not be appropriate for all output files, for example if they contain timestamps. This is why it is a good idea to re-run the tests locally with `pytest-workflow` before you create your pull request adding the module. If your files do indeed have timestamps or other issues that prevent you from using the md5sum check, you can edit the `test.yml` file to instead check that the file contains some specific content or, as a last resort, that it exists. The different test options are listed in the pytest-workflow docs.
Please follow the steps below to run the tests locally:
1. Install `Nextflow` (`>=21.04.0`)

2. Install any of `Docker`, `Singularity` or `Conda`

3. Install `pytest-workflow`

4. Start running your own tests using the appropriate `tag` defined in the `test.yml`:

   - Typical command with Docker:

     ```console
     cd /path/to/git/clone/of/nf-core/modules/
     PROFILE=docker pytest --tag fastqc --symlink --keep-workflow-wd
     ```

   - Typical command with Singularity:

     ```console
     cd /path/to/git/clone/of/nf-core/modules/
     TMPDIR=~ PROFILE=singularity pytest --tag fastqc --symlink --keep-workflow-wd
     ```

   - Typical command with Conda:

     ```console
     cd /path/to/git/clone/of/nf-core/modules/
     PROFILE=conda pytest --tag fastqc --symlink --keep-workflow-wd
     ```
5. See the docs on running pytest-workflow for more info.

> ⚠️ If you have a module named `build` this can conflict with some pytest internal behaviour, resulting in no tests being run (i.e. receiving a message of `collected 0 items`). In this case rename the `tests/<module>/build` directory to `tests/<module>/build_test`, and update the corresponding `test.yml` accordingly. An example can be seen with the `bowtie2/build` module tests.
### Uploading to nf-core/modules
Fork the `nf-core/modules` repository to your own GitHub account. Within the local clone of your fork, add the module file to the `modules/` directory. Please try and keep PRs as atomic as possible to aid the reviewing process - ideally, one module addition/update per PR.
Commit and push these changes to your local clone on GitHub, and then create a pull request on the `nf-core/modules` GitHub repo with the appropriate information.
We will be notified automatically when you have created your pull request, and providing that everything adheres to nf-core guidelines we will endeavour to approve your pull request as soon as possible.
### Guidelines
The key words "MUST", "MUST NOT", "SHOULD", etc. are to be interpreted as described in RFC 2119.
#### General
- All non-mandatory command-line tool options MUST be provided as a string i.e. `options.args` where `options` is a Groovy Map that MUST be provided via the Nextflow `addParams` option when including the module via `include` in the parent workflow (a sketch follows at the end of this list).

- Software that can be piped together SHOULD be added to separate module files unless there is a run-time or storage advantage in implementing it this way. For example, using a combination of `bwa` and `samtools` to output a BAM file instead of a SAM file:

  ```bash
  bwa mem | samtools view -B -T ref.fasta
  ```
- Where applicable, the usage and generation of compressed files SHOULD be enforced as input and output, respectively:

  - `*.fastq.gz` and NOT `*.fastq`
  - `*.bam` and NOT `*.sam`
- Where applicable, each module command MUST emit a file `<SOFTWARE>.version.txt` containing a single line with the software's version in the format `<VERSION_NUMBER>` e.g. `0.7.17`:

  ```bash
  echo \$(bwa 2>&1) | sed 's/^.*Version: //; s/Contact:.*\$//' > ${software}.version.txt
  ```

  If the software is unable to output a version number on the command-line then a variable called `VERSION` can be manually specified to create this file e.g. the homer/annotatepeaks module.
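  A hedged sketch of this pattern (the tool name and version number are made up; see the homer/annotatepeaks module for a real instance):

  ```nextflow
  script:
  def software = getSoftwareName(task.process)   // helper from functions.nf
  def VERSION  = '4.11'                          // hypothetical: tool prints no version on the CLI
  """
  sometool $options.args input.txt > output.txt
  echo $VERSION > ${software}.version.txt
  """
  ```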
- The process definition MUST NOT contain a `when` statement.
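To illustrate the first bullet above, a hedged sketch of how an `options.args` string travels from the parent workflow into a module's command line (the module, flag and file names are illustrative, and the module body is heavily simplified):

```nextflow
// Parent workflow: pass non-mandatory tool options in via addParams when including the module.
include { BWA_MEM } from './modules/nf-core/modules/bwa/mem/main' addParams( options: [ args: '-M' ] )
```

```nextflow
// Module main.nf (simplified sketch): the map arrives as params.options and
// options.args is spliced into the command line.
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
options        = initOptions(params.options)

process BWA_MEM {
    input:
    tuple val(meta), path(reads)
    path index                     // index prefix handling simplified here

    output:
    tuple val(meta), path('*.sam'), emit: sam

    script:
    """
    bwa mem $options.args -t $task.cpus $index $reads > ${meta.id}.sam
    """
}
```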
#### Naming conventions
- The directory structure for the module name must be all lowercase e.g. `modules/bwa/mem/`. The name of the software (i.e. `bwa`) and tool (i.e. `mem`) MUST be all one word.

- The process name in the module file MUST be all uppercase e.g. `process BWA_MEM {`. The name of the software (i.e. `BWA`) and tool (i.e. `MEM`) MUST be all one word separated by an underscore.

- All parameter names MUST follow the `snake_case` convention.

- All function names MUST follow the `camelCase` convention.
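Putting these conventions together, an illustrative skeleton (the `echo` body is a placeholder):

```nextflow
// modules/bwa/mem/main.nf - directory all lowercase; software ('bwa') and tool ('mem')
// are each one word. Parameter names use snake_case (e.g. params.publish_dir_mode)
// and function names use camelCase (e.g. initOptions, saveFiles).
process BWA_MEM {          // uppercase, software and tool separated by an underscore
    script:
    """
    echo 'placeholder'
    """
}
```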
#### Input/output options
- Input channel declarations MUST be defined for all possible input files (i.e. both required and optional files).

  - Directly associated auxiliary files to an input file MAY be defined within the same input channel alongside the main input channel (e.g. BAM and BAI).
  - Other generic auxiliary files used across different input files (e.g. common reference sequences) MAY be defined using a dedicated input channel (e.g. reference files).

- Named file extensions MUST be emitted for ALL output channels e.g. `path "*.txt", emit: txt`.

- Optional inputs are not currently supported by Nextflow. However, passing an empty list (`[]`) instead of a file as a module parameter can be used to work around this issue (the sketch after this list shows this pattern).
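A hedged sketch pulling these rules together (the tool, channels and file names are illustrative, not a real module):

```nextflow
process EXAMPLE_TOOL {
    input:
    tuple val(meta), path(bam), path(bai)   // main input plus its directly associated index
    path fasta                              // generic reference shared across input files
    path intervals                          // optional input: callers pass [] when absent

    output:
    tuple val(meta), path('*.vcf.gz'), emit: vcf       // named extension for every channel
    path '*.version.txt'             , emit: version

    script:
    def intervals_arg = intervals ? "--intervals $intervals" : ''   // [] evaluates as false
    """
    example_tool --bam $bam --fasta $fasta $intervals_arg --out ${meta.id}.vcf.gz
    echo '1.0' > example_tool.version.txt
    """
}
```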
#### Module parameters
- A module file SHOULD only define input and output files as command-line parameters to be executed within the process.

- All `params` within the module MUST be initialised and used in the local context of the module. In other words, named `params` defined in the parent workflow MUST NOT be assumed to be passed to the module, to allow developers to call their parameters whatever they want. In general, it may be more suitable to use additional `input` value channels to cater for such scenarios.

- If the tool supports multi-threading then you MUST provide the appropriate parameter using the Nextflow `task` variable e.g. `--threads $task.cpus`.

- Any parameters that need to be evaluated in the context of a particular sample e.g. single-end/paired-end data MUST also be defined within the process (see the sketch after this list).
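For example, a hedged sketch of evaluating single-end/paired-end arguments from the `meta` map inside the process (the aligner and its flags are illustrative):

```nextflow
script:
// Evaluate sample-specific flags from the meta map within the process
def input_reads = meta.single_end ? "-U $reads" : "-1 ${reads[0]} -2 ${reads[1]}"
"""
bowtie2 -x $index $input_reads --threads $task.cpus -S ${meta.id}.sam
"""
```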
#### Resource requirements
- An appropriate resource `label` MUST be provided for the module as listed in the nf-core pipeline template e.g. `process_low`, `process_medium` or `process_high`.

- If the tool supports multi-threading then you MUST provide the appropriate parameter using the Nextflow `task` variable e.g. `--threads $task.cpus`.

- If a module contains multiple tools that support multi-threading (e.g. piping output into a samtools command), you MUST assign cpus per tool such that the total number of used CPUs does not exceed `task.cpus` (a sketch follows this list).

  - For example, when combining two (or more) tools that both (all) support multi-threading, the CPU split can be assigned to a variable such as `split_cpus`.
  - If one tool is multi-threaded and another uses a single thread, you can specify this directly in the command itself e.g. with `${task.cpus - 1}`.
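A hedged sketch of both patterns from the last bullet (the commands are illustrative; the `split_cpus` variable name follows the guideline above):

```nextflow
script:
// Two multi-threaded tools piped together: split task.cpus between them
// (assumes task.cpus >= 2)
def split_cpus = Math.floor(task.cpus / 2) as int
"""
bwa mem -t ${split_cpus} $index $reads | samtools sort -@ ${task.cpus - split_cpus} -o ${meta.id}.bam -
"""
// If the second tool were single-threaded instead, the multi-threaded tool
// could simply be given ${task.cpus - 1} directly in the command.
```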
#### Software requirements
BioContainers is a registry of Docker and Singularity containers automatically created from all of the software packages on Bioconda. Where possible we will use BioContainers to fetch pre-built software containers and Bioconda to install software using Conda.
- Software requirements SHOULD be declared within the module file using the Nextflow `container` directive. For single-tool BioContainers, the `nf-core modules create` command will automatically fetch and fill in the appropriate Conda / Docker / Singularity definitions by parsing the information provided in the first part of the module name:

  ```nextflow
  conda (params.enable_conda ? "bioconda::bwa=0.7.17" : null)                          // Conda package
  if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
      container "https://depot.galaxyproject.org/singularity/bwa:0.7.17--hed695b0_7"   // Singularity image
  } else {
      container "quay.io/biocontainers/bwa:0.7.17--hed695b0_7"                         // Docker image
  }
  ```
- If the software is available on Conda it MUST also be defined using the Nextflow `conda` directive. Using `bioconda::bwa=0.7.17` as an example, software MUST be pinned to the channel (i.e. `bioconda`) and version (i.e. `0.7.17`). Conda packages MUST NOT be pinned to a build because builds can vary on different platforms.
If required, multi-tool containers may also be available on BioContainers e.g.
bwa
andsamtools
. You can install and use thegalaxy-tool-util
package to search for both single- and multi-tool containers available in Conda, Docker and Singularity format. e.g. to search for Docker (hosted on Quay.io) and Singularity multi-tool containers with bothbowtie
andsamtools
installed you can use the following command:mulled-search --destination quay singularity --channel bioconda --search bowtie samtools | grep "mulled"
  > NB: Build information for all tools within a multi-tool container can be obtained in the `/usr/local/conda-meta/history` file within the container.
It is also possible for a new multi-tool container to be built and added to BioContainers by submitting a pull request on their
multi-package-containers
repository.-
Fork the multi-package-containers repository
-
Make a change to the
hash.tsv
file in thecombinations
directory see here for an example wherepysam=0.16.0.1,biopython=1.78
was added. -
Commit the code and then make a pull request to the original repo, for example
-
Once the PR has been accepted a container will get built and you can find it using a search tool in the
galaxy-tool-util conda
packagemulled-search --destination quay singularity conda --search pysam biopython | grep "mulled" quay mulled-v2-3a59640f3fe1ed11819984087d31d68600200c3f 185a25ca79923df85b58f42deb48f5ac4481e91f-0 docker pull quay.io/biocontainers/mulled-v2-3a59640f3fe1ed11819984087d31d68600200c3f:185a25ca79923df85b58f42deb48f5ac4481e91f-0 singularity mulled-v2-3a59640f3fe1ed11819984087d31d68600200c3f 185a25ca79923df85b58f42deb48f5ac4481e91f-0 wget https://depot.galaxyproject.org/singularity/mulled-v2-3a59640f3fe1ed11819984087d31d68600200c3f:185a25ca79923df85b58f42deb48f5ac4481e91f-0
-
You can copy and paste the
mulled-*
path into the relevant Docker and Singularity lines in the Nextflowprocess
definition of your module -
To confirm that this is correct. Spin up a temporary Docker container
docker run --rm -it quay.io/biocontainers/mulled-v2-3a59640f3fe1ed11819984087d31d68600200c3f:185a25ca79923df85b58f42deb48f5ac4481e91f-0 /bin/sh
And in the command prompt type
$ grep specs /usr/local/conda-meta/history # update specs: ['biopython=1.78', 'pysam=0.16.0.1']
The packages should reflect those added to the multi-package-containers repo
hash.tsv
file
-
- If the software is not available on Bioconda a `Dockerfile` MUST be provided within the module directory. We will use GitHub Actions to auto-build the containers on the GitHub Packages registry.
#### Publishing results
The Nextflow `publishDir` definition is currently quite limited in terms of parameter/option evaluation. To overcome this, the publishing logic we have implemented for use with DSL2 modules attempts to minimise changing the `publishDir` directive (default: `params.outdir`) in favour of constructing and appending the appropriate output directory paths via the `saveAs:` statement e.g.

```nextflow
publishDir "${params.outdir}",
    mode: params.publish_dir_mode,
    saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }
```
The `saveFiles` function can be found in the `functions.nf` file of utility functions that will be copied into all module directories. It uses the various publishing `options` specified as input to the module to construct and append the relevant output path to `params.outdir`.
We also use a standardised parameter called `params.publish_dir_mode` that can be used to alter the file publishing method (default: `copy`).
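For example, a pipeline could switch to symlinked outputs in its `nextflow.config` (a minimal sketch; `symlink` is one of the standard Nextflow publish modes):

```nextflow
// nextflow.config
params {
    publish_dir_mode = 'symlink'   // overrides the default of 'copy'
}
```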
## Terminology
The features offered by Nextflow DSL2 can be used in various ways depending on the granularity with which you would like to write pipelines. Please see the listing below for the hierarchy and associated terminology we have decided to use when referring to DSL2 components:
- Module: A `process` that can be used within different pipelines and is as atomic as possible i.e. cannot be split into another module. An example of this would be a module file containing the process definition for a single tool such as `FastQC`. At present, this repository has been created to only host atomic module files that should be added to the `modules/` directory along with the required documentation and tests.

- Sub-workflow: A chain of multiple modules that offers a higher level of functionality within the context of a pipeline. For example, a sub-workflow to run multiple QC tools with FastQ files as input. Sub-workflows should be shipped with the pipeline implementation and, if required, they should be shared amongst different pipelines directly from there. As it stands, this repository will not host sub-workflows, although this may change in the future since well-written sub-workflows will be the most powerful aspect of DSL2.

- Workflow: What DSL1 users would consider an end-to-end pipeline. For example, from one or more inputs to a series of outputs. This can either be implemented using a large monolithic script as with DSL1, or by using a combination of DSL2 individual modules and sub-workflows.
## Help

For further information or help, don't hesitate to get in touch on the nf-core Slack `#modules` channel (you can join with this invite).
## Citation

If you use the module files in this repository for your analysis, please cite the `nf-core` publication as follows:
> The nf-core framework for community-curated bioinformatics pipelines.
>
> Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.
>
> Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.