mirror of https://github.com/MillironX/nf-core_modules.git
synced 2024-12-25 20:08:17 +00:00

Merge remote-tracking branch 'upstream/master'

This commit is contained in:
commit 793df9ece0

602 changed files with 11035 additions and 146044 deletions

26  .github/ISSUE_TEMPLATE/new_module.md  vendored  Normal file
@@ -0,0 +1,26 @@
---
name: New module
about: Suggest a new module for nf-core/modules
title: "new module: TOOL/SUBTOOL"
labels: new module
---

<!--
# nf-core/modules new module suggestion

Hi there!

Thanks for suggesting a new module for nf-core/modules!
Please delete this text and anything that's not relevant from the template below:

Replace TOOL with the Bioconda name for the tool in the following text, so that the link is functional.

Replace TOOL/SUBTOOL in the issue title so that it's understandable.
-->

I think it would be good to have a module for [TOOL](https://bioconda.github.io/recipes/TOOL/README.html)

- [ ] This module does not exist yet with the [`nf-core modules list`](https://github.com/nf-core/tools#list-modules) command
- [ ] There is no [open pull request](https://github.com/nf-core/modules/pulls) for this module
- [ ] There is no [open issue](https://github.com/nf-core/modules/issues) for this module
- [ ] If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
2  .github/PULL_REQUEST_TEMPLATE.md  vendored

@@ -13,6 +13,8 @@ Learn more about contributing: [CONTRIBUTING.md](https://github.com/nf-core/modu

## PR checklist

Closes #XXX <!-- If this PR fixes an issue, please link it here! -->

- [ ] This comment contains a description of changes (with reason).
- [ ] If you've fixed a bug or added code that should be tested, add tests!
- [ ] If you've added a new tool - have you followed the module conventions in the [contribution docs](https://github.com/nf-core/modules/tree/master/.github/CONTRIBUTING.md)?
3  .github/workflows/pytest-workflow.yml  vendored

@@ -23,7 +23,7 @@ jobs:
    strategy:
      fail-fast: false
      matrix:
        nxf_version: ['20.11.0-edge']
        nxf_version: ['20.11.0-edge', '21.03.0-edge']
        tags: ['${{ fromJson(needs.changes.outputs.modules) }}']
        profile: ['docker', 'singularity', 'conda']
    env:

@@ -92,3 +92,4 @@ jobs:
            /home/runner/pytest_workflow_*/*/.nextflow.log
            /home/runner/pytest_workflow_*/*/log.out
            /home/runner/pytest_workflow_*/*/log.err
            /home/runner/pytest_workflow_*/*/work
441  README.md

@@ -20,12 +20,14 @@ A repository for hosting [Nextflow DSL2](https://www.nextflow.io/docs/latest/dsl

- [Using existing modules](#using-existing-modules)
- [Adding a new module file](#adding-a-new-module-file)
- [Module template](#module-template)
- [Guidelines](#guidelines)
- [Testing](#testing)
- [Documentation](#documentation)
- [Checklist](#checklist)
- [nf-core modules create](#nf-core-modules-create)
- [Test data](#test-data)
- [Running tests manually](#running-tests-manually)
- [Uploading to `nf-core/modules`](#uploading-to-nf-coremodules)
- [Guidelines](#guidelines)
- [Terminology](#terminology)
- [Nextflow edge releases](#nextflow-edge-releases)
- [Help](#help)
- [Citation](#citation)

@@ -35,7 +37,7 @@ The module files hosted in this repository define a set of processes for softwar

We have written a helper command in the `nf-core/tools` package that uses the GitHub API to obtain the relevant information for the module files present in the [`software/`](software/) directory of this repository. This includes using `git` commit hashes to track changes for reproducibility purposes, and to download and install all of the relevant module files.

1. [Install](https://github.com/nf-core/tools#installation) the latest version of `nf-core/tools` (`>=1.10.2`)
1. Install the latest version of [`nf-core/tools`](https://github.com/nf-core/tools#installation) (`>=1.13`)
2. List the available modules:

    ```console

@@ -47,26 +49,24 @@ We have written a helper command in the `nf-core/tools` package that uses the Gi
    | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                          `._,._,'

    nf-core/tools version 1.10.2
    nf-core/tools version 1.13

    INFO     Modules available from nf-core/modules (master):          pipeline_modules.py:164

    INFO     Modules available from nf-core/modules (master):                     modules.py:51

    bwa/index
    bwa/mem
    deeptools/computematrix
    deeptools/plotfingerprint
    deeptools/plotheatmap
    deeptools/plotprofile
    fastqc
    ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
    ┃ Module Name                    ┃
    ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
    │ bandage/image                  │
    │ bcftools/consensus             │
    │ bcftools/filter                │
    │ bcftools/isec                  │
    ..truncated..
    ```

3. Install the module in your pipeline directory:

    ```console
    $ nf-core modules install . fastqc
    $ nf-core modules install . --tool fastqc

                                          ,--./,-.
          ___     __   __   __   ___     /,-._.--~\

@@ -74,12 +74,10 @@ We have written a helper command in the `nf-core/tools` package that uses the Gi
    | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                          `._,._,'

    nf-core/tools version 1.10.2
    nf-core/tools version 1.13


    INFO     Installing fastqc                                                    modules.py:62
    INFO     Downloaded 3 files to ./modules/nf-core/software/fastqc              modules.py:97
    INFO     Installing fastqc                                           pipeline_modules.py:213
    INFO     Downloaded 3 files to ./modules/nf-core/software/fastqc     pipeline_modules.py:236
    ```

4. Import the module in your Nextflow script:
@@ -92,25 +90,59 @@ We have written a helper command in the `nf-core/tools` package that uses the Gi
    include { FASTQC } from './modules/nf-core/software/fastqc/main' addParams( options: [:] )
    ```

5. We have plans to add other utility commands to help developers install and maintain modules downloaded from this repository so watch this space!
5. Remove the module from the pipeline repository if required:

    ```console
    $ nf-core modules --help
    $ nf-core modules remove . --tool fastqc

    ...truncated...
                                          ,--./,-.
          ___     __   __   __   ___     /,-._.--~\
    |\ | |__  __ /  ` /  \ |__) |__         }  {
    | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                          `._,._,'

    Commands:
      list     List available software modules.
      install  Add a DSL2 software wrapper module to a pipeline.
      update   Update one or all software wrapper modules. (NOT YET IMPLEMENTED)
      remove   Remove a software wrapper from a pipeline. (NOT YET IMPLEMENTED)
      check    Check that imported module code has not been modified. (NOT YET IMPLEMENTED)
    nf-core/tools version 1.13

    INFO     Removing fastqc                                            pipeline_modules.py:271
    INFO     Successfully removed fastqc                                pipeline_modules.py:285
    ```

## Adding a new module file
6. Check that a locally installed nf-core module is up-to-date compared to the one hosted in this repo:

> **NB:** The definition and standards for module files are still under discussion
but we are now gladly accepting submissions :)

    ```console
    $ nf-core modules lint . --tool fastqc

                                          ,--./,-.
          ___     __   __   __   ___     /,-._.--~\
    |\ | |__  __ /  ` /  \ |__) |__         }  {
    | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                          `._,._,'

    nf-core/tools version 1.13

    INFO     Linting pipeline: .                                                  lint.py:104
    INFO     Linting module: fastqc                                               lint.py:106

    ╭─────────────────────────────────────────────────────────────────────────────────╮
    │ [!] 1 Test Warning                                                               │
    ╰─────────────────────────────────────────────────────────────────────────────────╯
    ╭──────────────┬───────────────────────────────┬──────────────────────────────────╮
    │ Module name  │ Test message                  │ File path                        │
    ├──────────────┼───────────────────────────────┼──────────────────────────────────┤
    │ fastqc       │ Local copy of module outdated │ modules/nf-core/software/fastqc/ │
    ╰──────────────┴───────────────────────────────┴──────────────────────────────────╯
    ╭──────────────────────╮
    │ LINT RESULTS SUMMARY │
    ├──────────────────────┤
    │ [✔]  15 Tests Passed │
    │ [!]   1 Test Warning │
    │ [✗]   0 Test Failed  │
    ╰──────────────────────╯
    ```
We have plans to add other utility commands to help developers install and maintain modules downloaded from this repository, so watch this space, e.g. an `nf-core modules update` command to automatically check and update modules installed within the pipeline.

## Adding a new module file

If you decide to upload a module to `nf-core/modules` then this will
ensure that it will become available to all nf-core pipelines,

@@ -118,26 +150,205 @@ and to everyone within the Nextflow community! See
[`software/`](software)
for examples.

### Module template
### Checklist

We have added a directory called [`software/TOOL/SUBTOOL/`](software/TOOL/SUBTOOL/) that serves as a template with which to create your own module and [`tests/software/TOOL/SUBTOOL/`](tests/software/TOOL/SUBTOOL/) as an example of how to add the required CI tests. Where applicable, we have added extensive `TODO` statements for general information, to help guide you as to where to make the appropriate changes, and how to make them. If in doubt, have a look at how we have done things for other modules.
Please check that the module you wish to add isn't already on [`nf-core/modules`](https://github.com/nf-core/modules/tree/master/software):
- Use the [`nf-core modules list`](https://github.com/nf-core/tools#list-modules) command
- Check [open pull requests](https://github.com/nf-core/modules/pulls)
- Search [open issues](https://github.com/nf-core/modules/issues)

If the module doesn't exist on `nf-core/modules`:
- Please create a [new issue](https://github.com/nf-core/modules/issues/new?assignees=&labels=new%20module&template=new_module.md&title=new%20module:) before adding it
- Set an appropriate subject for the issue, e.g. `new module: fastqc`
- Add yourself to the `Assignees` so we can track who is working on the module
### nf-core modules create

We have implemented a number of commands in the `nf-core/tools` package to make it incredibly easy for you to create and contribute your own modules to nf-core/modules.

1. Install the latest version of [`nf-core/tools`](https://github.com/nf-core/tools#installation) (`>=1.13`)
2. Install [`nextflow`](https://nf-co.re/usage/installation) (`>=20.11.0-edge`; see [Nextflow edge releases](#nextflow-edge-releases))
3. Install any of [`Docker`](https://docs.docker.com/engine/installation/), [`Singularity`](https://www.sylabs.io/guides/3.0/user-guide/) or [`Conda`](https://conda.io/miniconda.html)
4. [Fork and clone this repo locally](#uploading-to-nf-coremodules)
5. Create a module using the [nf-core DSL2 module template](https://github.com/nf-core/tools/blob/master/nf_core/module-template/software/main.nf):

    ```console
    .
    ├── software
    │   └── TOOL
    │       └── SUBTOOL
    │           ├── functions.nf    ## Utility functions imported in main module script
    │           ├── main.nf         ## Main module script
    │           └── meta.yml        ## Documentation for module, input, output, params, author
    ├── tests
    │   └── software
    │       └── TOOL
    │           └── SUBTOOL
    │               ├── main.nf     ## Minimal workflow to test module
    │               └── test.yml    ## Pytest-workflow test file
    $ nf-core modules create . --tool fastqc --author @joebloggs --label process_low --meta

                                          ,--./,-.
          ___     __   __   __   ___     /,-._.--~\
    |\ | |__  __ /  ` /  \ |__) |__         }  {
    | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                          `._,._,'

    nf-core/tools version 1.13

    INFO     Using Bioconda package: 'bioconda::fastqc=0.11.9'                     create.py:130
    INFO     Using Docker / Singularity container with tag: 'fastqc:0.11.9--0'     create.py:140
    INFO     Created / edited following files:                                     create.py:218
               ./software/fastqc/functions.nf
               ./software/fastqc/main.nf
               ./software/fastqc/meta.yml
               ./tests/software/fastqc/main.nf
               ./tests/software/fastqc/test.yml
               ./tests/config/pytest_software.yml
    ```
All of the files required to add the module to `nf-core/modules` will be created/edited in the appropriate places. The 4 files you will need to change are:

1. [`./software/fastqc/main.nf`](https://github.com/nf-core/modules/blob/master/software/fastqc/main.nf)

    This is the main script containing the `process` definition for the module. You will see an extensive number of `TODO` statements to help guide you to fill in the appropriate sections and to ensure that you adhere to the guidelines we have set for module submissions.

2. [`./software/fastqc/meta.yml`](https://github.com/nf-core/modules/blob/master/software/fastqc/meta.yml)

    This file will be used to store general information about the module and author details - the majority of which will already be auto-filled. However, you will need to add a brief description of the files defined in the `input` and `output` sections of the main script since these will be unique to each module.

3. [`./tests/software/fastqc/main.nf`](https://github.com/nf-core/modules/blob/master/tests/software/fastqc/main.nf)

    Every module MUST have a test workflow. This file will define one or more Nextflow `workflow` definitions that will be used to unit test the output files created by the module. By default, one `workflow` definition will be added, but please feel free to add as many as possible so we can ensure that the module works on different data types / parameters, e.g. a separate `workflow` for single-end and paired-end data.

    Minimal test data required for your module may already exist within this repository, in which case you may just have to change a couple of paths in this file - see the [Test data](#test-data) section for more info and guidelines for adding new standardised data if required.

4. [`./tests/software/fastqc/test.yml`](https://github.com/nf-core/modules/blob/master/tests/software/fastqc/test.yml)

    This file will contain all of the details required to unit test the main script in the point above using [pytest-workflow](https://pytest-workflow.readthedocs.io/). If possible, any outputs produced by the test workflow(s) MUST be included and listed in this file along with an appropriate check, e.g. md5sum. The different test options are listed in the [pytest-workflow docs](https://pytest-workflow.readthedocs.io/en/stable/#test-options).

    As highlighted in the next point, we have added a command to make it much easier to test the workflow(s) defined for the module and to automatically create the `test.yml` with the md5sum hashes for all of the outputs generated by the module.

    `md5sum` checks are the preferable choice of test to determine file changes; however, this may not be possible for all outputs generated by some tools, e.g. if they include time stamps or command-related headers. Please do your best to avoid just checking for the file being present, e.g. it may still be possible to check that the file contains the appropriate text snippets.
6. Create a yaml file containing information required for module unit testing:

    ```console
    $ nf-core modules create-test-yml

                                          ,--./,-.
          ___     __   __   __   ___     /,-._.--~\
    |\ | |__  __ /  ` /  \ |__) |__         }  {
    | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                          `._,._,'

    nf-core/tools version 1.13


    INFO     Press enter to use default values (shown in brackets) or type your own responses     test_yml_builder.py:51
    ? Tool name: fastqc
    Test YAML output path (- for stdout) (tests/software/fastqc/test.yml):
    INFO     Looking for test workflow entry points: 'tests/software/fastqc/main.nf'              test_yml_builder.py:116
    INFO     Building test meta for entry point 'test_fastqc_single_end'                          test_yml_builder.py:150
    Test name (fastqc test_fastqc_single_end):
    Test command (nextflow run tests/software/fastqc -entry test_fastqc_single_end -c tests/config/nextflow.config):
    Test tags (comma separated) (fastqc,fastqc_single_end):
    Test output folder with results (leave blank to run test):
    ? Choose software profile Singularity
    INFO     Setting env var '$PROFILE' to 'singularity'                                          test_yml_builder.py:258
    INFO     Running 'fastqc' test with command:                                                  test_yml_builder.py:263
             nextflow run tests/software/fastqc -entry test_fastqc_single_end -c tests/config/nextflow.config --outdir /tmp/tmpgbneftf5
    INFO     Test workflow finished!                                                              test_yml_builder.py:276
    INFO     Writing to 'tests/software/fastqc/test.yml'                                          test_yml_builder.py:293
    ```

> NB: See the docs on [running tests manually](#running-tests-manually) if you would like to run the tests yourself.
7. Lint the module locally to check that it adheres to nf-core guidelines before submission:

    ```console
    $ nf-core modules lint . --tool fastqc

                                          ,--./,-.
          ___     __   __   __   ___     /,-._.--~\
    |\ | |__  __ /  ` /  \ |__) |__         }  {
    | \| |       \__, \__/ |  \ |___     \`-._,-`-,
                                          `._,._,'

    nf-core/tools version 1.13

    INFO     Linting modules repo: .                                                              lint.py:102
    INFO     Linting module: fastqc                                                               lint.py:106

    ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
    │ [!] 3 Test Warnings                                                                                            │
    ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
    ╭──────────────┬──────────────────────────────────────────────────────────────┬──────────────────────────────────╮
    │ Module name  │ Test message                                                 │ File path                        │
    ├──────────────┼──────────────────────────────────────────────────────────────┼──────────────────────────────────┤
    │ fastqc       │ TODO string in meta.yml: #Add a description of the module... │ modules/nf-core/software/fastqc/ │
    │ fastqc       │ TODO string in meta.yml: #Add a description and other det... │ modules/nf-core/software/fastqc/ │
    │ fastqc       │ TODO string in meta.yml: #Add a description of all of the... │ modules/nf-core/software/fastqc/ │
    ╰──────────────┴──────────────────────────────────────────────────────────────┴──────────────────────────────────╯
    ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
    │ [!] 1 Test Failed                                                                                              │
    ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
    ╭──────────────┬──────────────────────────────────────────────────────────────┬──────────────────────────────────╮
    │ Module name  │ Test message                                                 │ File path                        │
    ├──────────────┼──────────────────────────────────────────────────────────────┼──────────────────────────────────┤
    │ fastqc       │ 'meta' map not emitted in output channel(s)                  │ modules/nf-core/software/fastqc/ │
    ╰──────────────┴──────────────────────────────────────────────────────────────┴──────────────────────────────────╯
    ╭──────────────────────╮
    │ LINT RESULTS SUMMARY │
    ├──────────────────────┤
    │ [✔]  38 Tests Passed │
    │ [!]   3 Test Warning │
    │ [✗]   1 Test Failed  │
    ╰──────────────────────╯
    ```
### Test data

In order to test that each module added to `nf-core/modules` is actually working, and to be able to track any changes to results files between module updates, we have set up a number of GitHub Actions CI tests to run each module on a minimal test dataset using Docker, Singularity and Conda.

- All test data for `nf-core/modules` MUST be added to [`tests/data/`](tests/data/) and organised by filename extension.

- In order to keep the size of this repository as minimal as possible, pre-existing files from [`tests/data/`](tests/data/) MUST be reused if at all possible.

- Test files MUST be kept as tiny as possible.
### Running tests manually

As outlined in the [nf-core modules create](#nf-core-modules-create) section, we have made it quite trivial to create an initial yaml file (via the `nf-core modules create-test-yml` command) containing a listing of all of the module output files and their associated md5sums. However, md5sum checks may not be appropriate for all output files, for example if they contain timestamps. This is why it is a good idea to re-run the tests locally with `pytest-workflow` before you create your pull request adding the module. If your files do indeed have timestamps or other issues that prevent you from using the md5sum check, then you can edit the `test.yml` file to instead check that the file contains some specific content or, as a last resort, that it exists. The different test options are listed in the [pytest-workflow docs](https://pytest-workflow.readthedocs.io/en/stable/#test-options).

Please follow the steps below to run the tests locally:

1. Install [`nextflow`](https://nf-co.re/usage/installation)

2. Install any of [`Docker`](https://docs.docker.com/engine/installation/), [`Singularity`](https://www.sylabs.io/guides/3.0/user-guide/) or [`Conda`](https://conda.io/miniconda.html)

3. Install [`pytest-workflow`](https://pytest-workflow.readthedocs.io/en/stable/#installation)

4. Start running your own tests using the appropriate [`tag`](https://github.com/nf-core/modules/blob/3d720a24fd3c766ba56edf3d4e108a1c45d353b2/tests/software/fastqc/test.yml#L3-L5) defined in the `test.yml`:

    - Typical command with Docker:

        ```console
        cd /path/to/git/clone/of/nf-core/modules/
        PROFILE=docker pytest --tag fastqc_single_end --symlink --keep-workflow-wd
        ```

    - Typical command with Singularity:

        ```console
        cd /path/to/git/clone/of/nf-core/modules/
        TMPDIR=~ PROFILE=singularity pytest --tag fastqc_single_end --symlink --keep-workflow-wd
        ```

    - Typical command with Conda:

        ```console
        cd /path/to/git/clone/of/nf-core/modules/
        PROFILE=conda pytest --tag fastqc_single_end --symlink --keep-workflow-wd
        ```

    - See [docs on running pytest-workflow](https://pytest-workflow.readthedocs.io/en/stable/#running-pytest-workflow) for more info.
### Uploading to `nf-core/modules`

[Fork](https://help.github.com/articles/fork-a-repo/) the `nf-core/modules` repository to your own GitHub account. Within the local clone of your fork, add the module file to the [`software/`](software) directory. Please try and keep PRs as atomic as possible to aid the reviewing process - ideally, one module addition/update per PR.

Commit and push these changes to your local clone on GitHub, and then [create a pull request](https://help.github.com/articles/creating-a-pull-request-from-a-fork/) on the `nf-core/modules` GitHub repo with the appropriate information.

We will be notified automatically when you have created your pull request, and provided that everything adheres to nf-core guidelines we will endeavour to approve your pull request as soon as possible.

### Guidelines

The key words "MUST", "MUST NOT", "SHOULD", etc. are to be interpreted as described in [RFC 2119](https://tools.ietf.org/html/rfc2119).
@@ -194,7 +405,7 @@ using a combination of `bwa` and `samtools` to output a BAM file instead of a SA

#### Resource requirements

- An appropriate resource `label` MUST be provided for the module as listed in the [nf-core pipeline template](https://github.com/nf-core/tools/blob/master/nf_core/pipeline-template/%7B%7Bcookiecutter.name_noslash%7D%7D/conf/base.config#L29) e.g. `process_low`, `process_medium` or `process_high`.
- An appropriate resource `label` MUST be provided for the module as listed in the [nf-core pipeline template](https://github.com/nf-core/tools/blob/master/nf_core/pipeline-template/conf/base.config#L29-L46) e.g. `process_low`, `process_medium` or `process_high`.

- If the tool supports multi-threading then you MUST provide the appropriate parameter using the Nextflow `task` variable, e.g. `--threads $task.cpus`, as sketched below.
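For illustration, a minimal sketch of a process that follows both rules - the process name, command and file names here are hypothetical placeholders rather than a real nf-core module:

```nextflow
process TOOL_SUBTOOL {
    // Resource label as listed in the nf-core pipeline template's base.config
    label 'process_medium'

    input:
    tuple val(meta), path(reads)

    output:
    tuple val(meta), path('*.out'), emit: results

    script:
    """
    tool subtool --threads $task.cpus --input $reads --output ${meta.id}.out
    """
}
```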
|
@ -202,10 +413,10 @@ using a combination of `bwa` and `samtools` to output a BAM file instead of a SA
|
|||
|
||||
[BioContainers](https://biocontainers.pro/#/) is a registry of Docker and Singularity containers automatically created from all of the software packages on [Bioconda](https://bioconda.github.io/). Where possible we will use BioContainers to fetch pre-built software containers and Bioconda to install software using Conda.
|
||||
|
||||
- Software requirements SHOULD be declared within the module file using the Nextflow `container` directive. For single-tool BioContainers, the simplest method to obtain the Docker container path is to replace `bwa` with your tool name in this [Quay.io link](https://quay.io/repository/biocontainers/bwa?tab=tags). You will see a list of tags sorted by the most recent. You can then use exactly the same name (e.g. `bwa`) version (e.g. `0.7.17`) and tag (e.g. `hed695b0_7`) to add all of the Conda, Docker and Singularity definitions in the module.
|
||||
- Software requirements SHOULD be declared within the module file using the Nextflow `container` directive. For single-tool BioContainers, the `nf-core modules create` command will automatically fetch and fill-in the appropriate Conda / Docker / Singularity definitions by parsing the information provided in the first part of the module name:
|
||||
|
||||
```nextflow
|
||||
conda (params.enable_conda ? "bioconda::bwa=0.7.17=hed695b0_7" : null) // Conda package
|
||||
conda (params.enable_conda ? "bioconda::bwa=0.7.17" : null) // Conda package
|
||||
if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
|
||||
container "https://depot.galaxyproject.org/singularity/bwa:0.7.17--hed695b0_7" // Singularity image
|
||||
} else {
|
||||
|
@ -213,7 +424,7 @@ using a combination of `bwa` and `samtools` to output a BAM file instead of a SA
|
|||
}
|
||||
```
|
||||
|
||||
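The fenced block above is interrupted by the diff hunk boundary. For reference, a reconstructed sketch of the complete container-selection pattern, matching the module files added later in this commit (e.g. `software/adapterremoval/main.nf`); the final Docker line for `bwa` is inferred from that same pattern and should be treated as illustrative:

```nextflow
conda (params.enable_conda ? "bioconda::bwa=0.7.17" : null)                             // Conda package
if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
    container "https://depot.galaxyproject.org/singularity/bwa:0.7.17--hed695b0_7"      // Singularity image
} else {
    container "quay.io/biocontainers/bwa:0.7.17--hed695b0_7"                            // Docker image
}
```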
- If the software is available on Conda it MUST also be defined using the Nextflow `conda` directive. Using `bioconda::bwa=0.7.17=hed695b0_7` as an example, software MUST be pinned to the channel (i.e. `bioconda`), version (i.e. `0.7.17`) and build (i.e. `hed695b0_7`). This allows us to perform file output integrity CI tests on the same input test data with Docker, Singularity and Conda.
- If the software is available on Conda it MUST also be defined using the Nextflow `conda` directive. Using `bioconda::bwa=0.7.17` as an example, software MUST be pinned to the channel (i.e. `bioconda`) and version (i.e. `0.7.17`). Conda packages MUST NOT be pinned to a build because builds can vary on different platforms.

- If required, multi-tool containers may also be available on BioContainers, e.g. [`bwa` and `samtools`](https://biocontainers.pro/#/tools/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40). You can install and use the [`galaxy-tool-util`](https://anaconda.org/bioconda/galaxy-tool-util) package to search for both single- and multi-tool containers available in Conda, Docker and Singularity format, e.g. to search for Docker (hosted on Quay.io) and Singularity multi-tool containers with both `bowtie` and `samtools` installed you can use the following command:
@@ -224,6 +435,32 @@ using a combination of `bwa` and `samtools` to output a BAM file instead of a SA
> NB: Build information for all tools within a multi-tool container can be obtained in the `/usr/local/conda-meta/history` file within the container.

- It is also possible for a new multi-tool container to be built and added to BioContainers by submitting a pull request on their [`multi-package-containers`](https://github.com/BioContainers/multi-package-containers) repository.
    - Fork the [multi-package-containers repository](https://github.com/BioContainers/multi-package-containers)
    - Make a change to the `hash.tsv` file in the `combinations` directory; see [here](https://github.com/aunderwo/multi-package-containers/blob/master/combinations/hash.tsv#L124) for an example where `pysam=0.16.0.1,biopython=1.78` was added.
    - Commit the code and then make a pull request to the original repo; see this [example](https://github.com/BioContainers/multi-package-containers/pull/1661).
    - Once the PR has been accepted, a container will get built and you can find it using the search tool in the `galaxy-tool-util` Conda package:

        ```console
        mulled-search --destination quay singularity conda --search pysam biopython | grep "mulled"
        quay         mulled-v2-3a59640f3fe1ed11819984087d31d68600200c3f  185a25ca79923df85b58f42deb48f5ac4481e91f-0  docker pull quay.io/biocontainers/mulled-v2-3a59640f3fe1ed11819984087d31d68600200c3f:185a25ca79923df85b58f42deb48f5ac4481e91f-0
        singularity  mulled-v2-3a59640f3fe1ed11819984087d31d68600200c3f  185a25ca79923df85b58f42deb48f5ac4481e91f-0  wget https://depot.galaxyproject.org/singularity/mulled-v2-3a59640f3fe1ed11819984087d31d68600200c3f:185a25ca79923df85b58f42deb48f5ac4481e91f-0
        ```

    - You can copy and paste the `mulled-*` path into the relevant Docker and Singularity lines in the Nextflow `process` definition of your module.
    - To confirm that this is correct, spin up a temporary Docker container:

        ```console
        docker run --rm -it quay.io/biocontainers/mulled-v2-3a59640f3fe1ed11819984087d31d68600200c3f:185a25ca79923df85b58f42deb48f5ac4481e91f-0 /bin/sh
        ```

        and at the command prompt type:

        ```console
        $ grep specs /usr/local/conda-meta/history
        # update specs: ['biopython=1.78', 'pysam=0.16.0.1']
        ```

        The packages should reflect those added to the multi-package-containers repo `hash.tsv` file.

- If the software is not available on Bioconda, a `Dockerfile` MUST be provided within the module directory. We will use GitHub Actions to auto-build the containers on the [GitHub Packages registry](https://github.com/features/packages).
@@ -241,73 +478,6 @@ The `saveFiles` function can be found in the [`functions.nf`](software/fastqc/fu

We also use a standardised parameter called `params.publish_dir_mode` that can be used to alter the file publishing method (default: `copy`). A sketch of how a module wires these together follows below.
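For illustration, this is how the module files added later in this commit combine `publishDir`, `params.publish_dir_mode` and `saveFiles` (mirroring the pattern used in `software/adapterremoval/main.nf`):

```nextflow
publishDir "${params.outdir}",
    mode: params.publish_dir_mode,
    saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }
```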
### Testing

In order to test that each module added to `nf-core/modules` is actually working, and to be able to track any changes to results files between module updates, we have set up a number of GitHub Actions CI tests to run each module on a minimal test dataset using Docker, Singularity and Conda.

#### Test data

- All test data for `nf-core/modules` MUST be added to [`tests/data/`](tests/data/) and organised by filename extension.

- In order to keep the size of this repository as minimal as possible, pre-existing files from [`tests/data/`](tests/data/) MUST be reused if at all possible.

- Test files MUST be kept as tiny as possible.

#### Pytest workflow

- Every module MUST have a test workflow utilising test data added to the appropriate directory, e.g. [`tests/software/fastqc/main.nf`](tests/software/fastqc/main.nf)

- Any outputs produced by the test workflow MUST be included in the [pytest-workflow](https://pytest-workflow.readthedocs.io/en/stable) for that tool, e.g. [`tests/software/fastqc/test.yml`](tests/software/fastqc/test.yml). `md5sum` checks are the preferable choice of test to determine file changes; however, this may not be possible for all outputs generated by some tools, e.g. if they include time stamps or command-related headers. Please do your best to avoid just checking for the file being present, e.g. it may still be possible to check that the file contains the appropriate text snippets.

- A filter for the module must be created in [`.github/filters.yml`](.github/filters.yml). If the test workflow you have created invokes more than one tool, please include any paths specific to those tools too, e.g. `bowtie build` is upstream of `bowtie align` and they have both been chained together to test the latter.

#### Running Tests Locally

1. Install [`nextflow`](https://nf-co.re/usage/installation)

2. Install any of [`Docker`](https://docs.docker.com/engine/installation/), [`Singularity`](https://www.sylabs.io/guides/3.0/user-guide/) or [`Conda`](https://conda.io/miniconda.html)

3. Install [`pytest-workflow`](https://pytest-workflow.readthedocs.io/en/stable/#installation)

4. Start running your own tests!

    - Typical command with Docker:

        ```console
        cd /path/to/git/clone/of/nf-core/modules/
        PROFILE=docker pytest --tag bowtie --symlink --keep-workflow-wd
        ```

    - Typical command with Singularity:

        ```console
        cd /path/to/git/clone/of/nf-core/modules/
        TMPDIR=~ PROFILE=singularity pytest --tag bowtie --symlink --keep-workflow-wd
        ```

    - Typical command with Conda:

        ```console
        cd /path/to/git/clone/of/nf-core/modules/
        PROFILE=conda pytest --tag bowtie --symlink --keep-workflow-wd
        ```

    - See [docs on running pytest-workflow](https://pytest-workflow.readthedocs.io/en/stable/#running-pytest-workflow) for more info.

### Documentation

- A module MUST be documented in the [`meta.yml`](software/TOOL/SUBTOOL/meta.yml) file. It MUST document `params`, `input` and `output`. `input` and `output` MUST be a nested list.

We are aware that there is very little documentation in the [`Documentation`](#documentation) section. Writing more code and tests is so much cooler! Please bear with us, we will get there eventually...
### Uploading to `nf-core/modules`

[Fork](https://help.github.com/articles/fork-a-repo/) the `nf-core/modules` repository to your own GitHub account. Within the local clone of your fork, add the module file to the [`software/`](software) directory. Please try and keep PRs as atomic as possible to aid the reviewing process - ideally, one module addition/update per PR.

Commit and push these changes to your local clone on GitHub, and then [create a pull request](https://help.github.com/articles/creating-a-pull-request-from-a-fork/) on the `nf-core/modules` GitHub repo with the appropriate information.

We will be notified automatically when you have created your pull request, and provided that everything adheres to nf-core guidelines we will endeavour to approve your pull request as soon as possible.

## Terminology

The features offered by Nextflow DSL2 can be used in various ways depending on the granularity with which you would like to write pipelines. Please see the listing below for the hierarchy and associated terminology we have decided to use when referring to DSL2 components:

@@ -318,6 +488,25 @@ The features offered by Nextflow DSL2 can be used in various ways depending on t

- *Workflow*: What DSL1 users would consider an end-to-end pipeline. For example, from one or more inputs to a series of outputs. This can either be implemented using a large monolithic script as with DSL1, or by using a combination of DSL2 individual modules and sub-workflows.

## Nextflow edge releases

Stable releases will become more infrequent as Nextflow shifts its development model to becoming more dynamic via the usage of plugins. This will allow functionality to be added as an extension to the core codebase with a release cycle that could potentially be independent to that of Nextflow itself. As a result of the reduction in stable releases, some pipelines may be required to use Nextflow `edge` releases in order to be able to exploit cutting "edge" features, e.g. version 3.0 of the nf-core/rnaseq pipeline requires Nextflow `>=20.11.0-edge` in order to be able to directly download Singularity containers over `http` (see [nf-core/rnaseq#496](https://github.com/nf-core/rnaseq/issues/496)).

There are a number of ways you can install Nextflow `edge` releases, the main difference with stable releases being that you have to `export` the version you would like to install before issuing the appropriate installation/execution commands as highlighted below.

- If you would like to download and install a Nextflow `edge` release from scratch with minimal fuss:

    ```bash
    export NXF_VER="20.11.0-edge"
    wget -qO- get.nextflow.io | bash
    sudo mv nextflow /usr/local/bin/
    nextflow run nf-core/rnaseq -profile test,docker -r 3.0
    ```

    > Note: if you don't have the `sudo` privileges required for the last command above, you can move the `nextflow` binary somewhere else and export that directory to `$PATH` instead. One way of doing that on Linux would be to add `export PATH=$PATH:/path/to/nextflow/binary/` to your `~/.bashrc` file so that it is available every time you log in to your system.

- Manually download and install Nextflow from the available [assets](https://github.com/nextflow-io/nextflow/releases) on GitHub. See [Nextflow installation docs](https://www.nextflow.io/docs/latest/getstarted.html#installation).

## Help

For further information or help, don't hesitate to get in touch on the [Slack `#modules` channel](https://nfcore.slack.com/channels/modules) (you can join with [this invite](https://nf-co.re/join/slack)).
@@ -334,16 +523,6 @@ If you use the module files in this repository for your analysis please you can

<!---

- The module MUST accept a parameter `params.publish_results` accepting at least
    - `"none"`, to publish no files at all,
    - a glob pattern which is initialised to a sensible default value.

### Configuration and parameters

The module files hosted in this repository define a set of processes for software tools such as `fastqc`, `trimgalore`, `bwa` etc. This allows you to share and add common functionality across multiple pipelines in a modular fashion.

> The definition and standards for module files are still under discussion amongst the community but hopefully, a description should be added here soon!

### Offline usage

If you want to use an existing module file available in `nf-core/modules`, and you're running on a system that has no internet connection, you'll need to download the repository (e.g. `git clone https://github.com/nf-core/modules.git`) and place it in a location that is visible to the file system on which you are running the pipeline. Then run the pipeline by creating a custom config file called e.g. `custom_module.conf` containing the following information:

@@ -361,4 +540,6 @@ nextflow run /path/to/pipeline/ -c /path/to/custom_module.conf

> Note that the nf-core/tools helper package has a `download` command to download all required pipeline
> files + singularity containers + institutional configs + modules in one go for you, to make this process easier.

# New test data created for the module: sequenzautils/bam2seqz
The new test data is the output of another module, sequenzautils/bcwiggle, which uses the sarscov2 genome fasta file as input.
-->
@@ -1,20 +0,0 @@
process shovill {

    tag "$shovill"

    publishDir "${params.outdir}", pattern: '*.fasta', mode: 'copy'

    container "quay.io/biocontainers/shovill:1.0.9--0"

    input:
    tuple val(sample_id), path(forward), path(reverse)

    output:
    path "${sample_id}.fasta"

    script:
    """
    shovill --R1 ${forward} --R2 ${reverse} --outdir shovill_out
    mv shovill_out/contigs.fa ${sample_id}.fasta
    """
}
@@ -1,30 +0,0 @@
name: Shovill
description: Create a bacterial assembly from paired fastq using shovill
keywords:
    - Genome Assembly
    - Bacterial Isolates
tools:
    - fastqc:
        description: |
            Shovill assembles bacterial isolate genomes from Illumina
            paired-end reads. Shovill uses the SPAdes genome assembler,
            providing pre and post-processing to the SPAdes assembly.
            It also supports SKESA, Velvet and Megahit.
        homepage: https://github.com/tseemann/shovill
        documentation: https://github.com/tseemann/shovill/blob/master/README.md
input:
    -
      - sample_id:
          type: string
          description: Sample identifier
      - reads:
          type: file
          description: pair of fastq files
output:
    -
      - assembly:
          type: file
          description: fasta file
          pattern: ${sample_id}.fasta
authors:
    - "@annacprice"
@@ -1,17 +0,0 @@
#!/usr/bin/env nextflow

nextflow.preview.dsl = 2

// import shovill
include {shovill} from '../main.nf' params(params)

// define input channel
readsPath = '../../../test-datasets/tools/shovill/input/SRR3609257_{1,2}.fastq.gz'
Channel
    .fromFilePairs( "${readsPath}", flat: true )
    .set{ ch_reads }

// main workflow
workflow {
    shovill(ch_reads)
}
@@ -1,5 +0,0 @@
// docker
docker.enabled = true

// output directory
params.outdir = './results'
60  software/adapterremoval/functions.nf  Normal file

@@ -0,0 +1,60 @@
/*
 * -----------------------------------------------------
 *  Utility functions used in nf-core DSL2 module files
 * -----------------------------------------------------
 */

/*
 * Extract name of software tool from process name using $task.process
 */
def getSoftwareName(task_process) {
    return task_process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
}

/*
 * Function to initialise default values and to generate a Groovy Map of available options for nf-core modules
 */
def initOptions(Map args) {
    def Map options = [:]
    options.args          = args.args ?: ''
    options.args2         = args.args2 ?: ''
    options.args3         = args.args3 ?: ''
    options.publish_by_id = args.publish_by_id ?: false
    options.publish_dir   = args.publish_dir ?: ''
    options.publish_files = args.publish_files
    options.suffix        = args.suffix ?: ''
    return options
}

/*
 * Tidy up and join elements of a list to return a path string
 */
def getPathFromList(path_list) {
    def paths = path_list.findAll { item -> !item?.trim().isEmpty() }  // Remove empty entries
    paths = paths.collect { it.trim().replaceAll("^[/]+|[/]+\$", "") } // Trim whitespace and trailing slashes
    return paths.join('/')
}

/*
 * Function to save/publish module results
 */
def saveFiles(Map args) {
    if (!args.filename.endsWith('.version.txt')) {
        def ioptions = initOptions(args.options)
        def path_list = [ ioptions.publish_dir ?: args.publish_dir ]
        if (ioptions.publish_by_id) {
            path_list.add(args.publish_id)
        }
        if (ioptions.publish_files instanceof Map) {
            for (ext in ioptions.publish_files) {
                if (args.filename.endsWith(ext.key)) {
                    def ext_list = path_list.collect()
                    ext_list.add(ext.value)
                    return "${getPathFromList(ext_list)}/$args.filename"
                }
            }
        } else if (ioptions.publish_files == null) {
            return "${getPathFromList(path_list)}/$args.filename"
        }
    }
}
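To make the behaviour of these helpers concrete, a small hypothetical walk-through - the option values and file names below are illustrative, not taken from any real module configuration:

```nextflow
// Hypothetical walk-through of the helper functions above
def opts = initOptions([ args: '--minlength 30', publish_dir: 'adapterremoval', publish_by_id: true ])

// getPathFromList() trims whitespace/slashes and joins the remaining entries
assert getPathFromList([ 'adapterremoval', 'sample1/' ]) == 'adapterremoval/sample1'

// saveFiles() builds the publish path for regular output files...
assert saveFiles(filename: 'sample1.trimmed.fastq.gz', options: opts,
                 publish_dir: 'adapterremoval', publish_id: 'sample1') == 'adapterremoval/sample1/sample1.trimmed.fastq.gz'

// ...and returns null for *.version.txt files, so they are not published
assert saveFiles(filename: 'adapterremoval.version.txt', options: opts,
                 publish_dir: 'adapterremoval', publish_id: 'sample1') == null
```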
80  software/adapterremoval/main.nf  Normal file

@@ -0,0 +1,80 @@
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
options        = initOptions(params.options)

process ADAPTERREMOVAL {
    tag "$meta.id"
    label 'process_medium'
    publishDir "${params.outdir}",
        mode: params.publish_dir_mode,
        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

    conda (params.enable_conda ? "bioconda::adapterremoval=2.3.2" : null)
    if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
        container "https://depot.galaxyproject.org/singularity/adapterremoval:2.3.2--hb7ba0dd_0"
    } else {
        container "quay.io/biocontainers/adapterremoval:2.3.2--hb7ba0dd_0"
    }

    input:
    tuple val(meta), path(reads)

    output:
    tuple val(meta), path('*.fastq.gz'), emit: reads
    tuple val(meta), path('*.log')     , emit: log
    path "*.version.txt"               , emit: version

    script:
    def software = getSoftwareName(task.process)
    def prefix   = options.suffix ? "${meta.id}${options.suffix}" : "${meta.id}"

    if (meta.single_end) {
        """
        AdapterRemoval \\
            --file1 $reads \\
            $options.args \\
            --basename $prefix \\
            --threads $task.cpus \\
            --settings ${prefix}.log \\
            --output1 ${prefix}.trimmed.fastq.gz \\
            --seed 42 \\
            --gzip

        AdapterRemoval --version 2>&1 | sed -e "s/AdapterRemoval ver. //g" > ${software}.version.txt
        """
    } else if (!meta.single_end && !meta.collapse) {
        """
        AdapterRemoval \\
            --file1 ${reads[0]} \\
            --file2 ${reads[1]} \\
            $options.args \\
            --basename $prefix \\
            --threads $task.cpus \\
            --settings ${prefix}.log \\
            --output1 ${prefix}.pair1.trimmed.fastq.gz \\
            --output2 ${prefix}.pair2.trimmed.fastq.gz \\
            --seed 42 \\
            --gzip

        AdapterRemoval --version 2>&1 | sed -e "s/AdapterRemoval ver. //g" > ${software}.version.txt
        """
    } else {
        """
        AdapterRemoval \\
            --file1 ${reads[0]} \\
            --file2 ${reads[1]} \\
            --collapse \\
            $options.args \\
            --basename $prefix \\
            --threads $task.cpus \\
            --settings ${prefix}.log \\
            --seed 42 \\
            --gzip

        cat *.collapsed.gz *.collapsed.truncated.gz > ${prefix}.merged.fastq.gz
        AdapterRemoval --version 2>&1 | sed -e "s/AdapterRemoval ver. //g" > ${software}.version.txt
        """
    }
}
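For context, a sketch of how a pipeline might import and call this module, following the `include { FASTQC } ... addParams( options: [:] )` pattern shown in the README above; the `args` value, file names and meta map contents are illustrative assumptions:

```nextflow
// Hypothetical pipeline-side usage of the ADAPTERREMOVAL module
include { ADAPTERREMOVAL } from './software/adapterremoval/main' addParams( options: [ args: '--minlength 30' ] )

workflow {
    // Tuples of [ meta map, reads ] as documented in meta.yml
    ch_reads = Channel.of(
        [ [ id:'test', single_end:false, collapse:false ],
          [ file('test_1.fastq.gz'), file('test_2.fastq.gz') ] ]
    )
    ADAPTERREMOVAL(ch_reads)
}
```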
50  software/adapterremoval/meta.yml  Normal file

@@ -0,0 +1,50 @@
name: adapterremoval
description: Trim sequencing adapters and collapse overlapping reads
keywords:
    - trimming
    - adapters
    - merging
    - fastq
tools:
    - adapterremoval:
        description: The AdapterRemoval v2 tool for merging and clipping reads.
        homepage: https://github.com/MikkelSchubert/adapterremoval
        documentation: https://adapterremoval.readthedocs.io
        licence: ['GPL v3']

input:
    - meta:
        type: map
        description: |
            Groovy Map containing sample information
            e.g. [ id:'test', single_end:false, collapse:false ]
    - reads:
        type: file
        description: |
            List of input FastQ files of size 1 and 2 for single-end and paired-end data,
            respectively.
        pattern: "*.{fq,fastq,fq.gz,fastq.gz}"

output:
    - meta:
        type: map
        description: |
            Groovy Map containing sample information
            e.g. [ id:'test', single_end:false ]
    - reads:
        type: file
        description: |
            List of adapter-trimmed FastQ files of size 1 or 2 for
            single-end or collapsed data and paired-end data, respectively.
        pattern: "*.{fastq.gz}"
    - log:
        type: file
        description: AdapterRemoval log file
        pattern: "*.log"
    - version:
        type: file
        description: File containing software version
        pattern: "*.{version.txt}"

authors:
    - "@maxibor"
60  software/allelecounter/functions.nf  Normal file

@@ -0,0 +1,60 @@
/*
 * -----------------------------------------------------
 *  Utility functions used in nf-core DSL2 module files
 * -----------------------------------------------------
 */

/*
 * Extract name of software tool from process name using $task.process
 */
def getSoftwareName(task_process) {
    return task_process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
}

/*
 * Function to initialise default values and to generate a Groovy Map of available options for nf-core modules
 */
def initOptions(Map args) {
    def Map options = [:]
    options.args          = args.args ?: ''
    options.args2         = args.args2 ?: ''
    options.args3         = args.args3 ?: ''
    options.publish_by_id = args.publish_by_id ?: false
    options.publish_dir   = args.publish_dir ?: ''
    options.publish_files = args.publish_files
    options.suffix        = args.suffix ?: ''
    return options
}

/*
 * Tidy up and join elements of a list to return a path string
 */
def getPathFromList(path_list) {
    def paths = path_list.findAll { item -> !item?.trim().isEmpty() }  // Remove empty entries
    paths = paths.collect { it.trim().replaceAll("^[/]+|[/]+\$", "") } // Trim whitespace and trailing slashes
    return paths.join('/')
}

/*
 * Function to save/publish module results
 */
def saveFiles(Map args) {
    if (!args.filename.endsWith('.version.txt')) {
        def ioptions = initOptions(args.options)
        def path_list = [ ioptions.publish_dir ?: args.publish_dir ]
        if (ioptions.publish_by_id) {
            path_list.add(args.publish_id)
        }
        if (ioptions.publish_files instanceof Map) {
            for (ext in ioptions.publish_files) {
                if (args.filename.endsWith(ext.key)) {
                    def ext_list = path_list.collect()
                    ext_list.add(ext.value)
                    return "${getPathFromList(ext_list)}/$args.filename"
                }
            }
        } else if (ioptions.publish_files == null) {
            return "${getPathFromList(path_list)}/$args.filename"
        }
    }
}
41  software/allelecounter/main.nf  Normal file

@@ -0,0 +1,41 @@
// Import generic module functions
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
options        = initOptions(params.options)

process ALLELECOUNTER {
    tag "$meta.id"
    label 'process_low'
    publishDir "${params.outdir}",
        mode: params.publish_dir_mode,
        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

    conda (params.enable_conda ? "bioconda::cancerit-allelecount=4.2.1" : null)
    if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
        container "https://depot.galaxyproject.org/singularity/cancerit-allelecount:4.2.1--h3ecb661_0"
    } else {
        container "quay.io/biocontainers/cancerit-allelecount:4.2.1--h3ecb661_0"
    }

    input:
    tuple val(meta), path(bam), path(bai)
    path loci

    output:
    tuple val(meta), path("*.alleleCount"), emit: allelecount
    path "*.version.txt"                  , emit: version

    script:
    def software = getSoftwareName(task.process)
    def prefix   = options.suffix ? "${meta.id}${options.suffix}" : "${meta.id}"
    """
    alleleCounter \\
        $options.args \\
        -l $loci \\
        -b $bam \\
        -o ${prefix}.alleleCount

    alleleCounter --version > ${software}.version.txt
    """
}
52  software/allelecounter/meta.yml  Normal file

@@ -0,0 +1,52 @@
name: allelecounter

description: Generates a count of coverage of alleles
keywords:
    - allele
    - count
tools:
    - allelecounter:
        description: Takes a file of locations and a [cr|b]am file and generates a count of coverage of each allele at that location (given any filter settings)
        homepage: https://github.com/cancerit/alleleCount
        documentation: https://github.com/cancerit/alleleCount
        tool_dev_url: https://github.com/cancerit/alleleCount
        doi: ""
        licence: AGPL 3.0

input:
    - meta:
        type: map
        description: |
            Groovy Map containing sample information
            e.g. [ id:'test', single_end:false ]
    - bam:
        type: file
        description: BAM/CRAM/SAM file
        pattern: "*.{bam,cram,sam}"
    - bai:
        type: file
        description: BAM/CRAM/SAM index file
        pattern: "*.{bai,crai,sai}"
    - loci:
        type: file
        description: Loci file <CHR><tab><POS1>
        pattern: "*.{tsv}"

output:
    - meta:
        type: map
        description: |
            Groovy Map containing sample information
            e.g. [ id:'test', single_end:false ]
    - version:
        type: file
        description: File containing software version
        pattern: "*.{version.txt}"
    - alleleCount:
        type: file
        description: Allele count file
        pattern: "*.{alleleCount}"

authors:
    - "@fullama"
@ -11,27 +11,6 @@ tools:
|
|||
Bandage - a Bioinformatics Application for Navigating De novo Assembly Graphs Easily
|
||||
homepage: https://github.com/rrwick/Bandage
|
||||
documentation: https://github.com/rrwick/Bandage
|
||||
params:
|
||||
- outdir:
|
||||
type: string
|
||||
description: |
|
||||
The pipeline's output directory. By default, the module will
|
||||
output files into `$params.outdir/<SOFTWARE>`
|
||||
- publish_dir_mode:
|
||||
type: string
|
||||
description: |
|
||||
Value for the Nextflow `publishDir` mode parameter.
|
||||
Available: symlink, rellink, link, copy, copyNoFollow, move.
|
||||
- enable_conda:
|
||||
type: boolean
|
||||
description: |
|
||||
Run the module with Conda using the software specified
|
||||
via the `conda` directive
|
||||
- singularity_pull_docker_container:
|
||||
type: boolean
|
||||
description: |
|
||||
Instead of directly downloading Singularity images for use with Singularity,
|
||||
force the workflow to pull and convert Docker containers instead.
|
||||
input:
|
||||
- meta:
|
||||
type: map
|
||||
|
|
|
@@ -11,27 +11,6 @@ tools:
     homepage: http://samtools.github.io/bcftools/bcftools.html
     documentation: http://www.htslib.org/doc/bcftools.html
     doi: 10.1093/bioinformatics/btp352
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map

@@ -11,27 +11,6 @@ tools:
     homepage: http://samtools.github.io/bcftools/bcftools.html
     documentation: http://www.htslib.org/doc/bcftools.html
     doi: 10.1093/bioinformatics/btp352
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map

@@ -13,27 +13,6 @@ tools:
     homepage: http://samtools.github.io/bcftools/bcftools.html
     documentation: http://www.htslib.org/doc/bcftools.html
     doi: 10.1093/bioinformatics/btp352
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map

@@ -11,27 +11,6 @@ tools:
     homepage: http://samtools.github.io/bcftools/bcftools.html
     documentation: http://www.htslib.org/doc/bcftools.html
     doi: 10.1093/bioinformatics/btp352
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map

@@ -11,27 +11,6 @@ tools:
     homepage: http://samtools.github.io/bcftools/bcftools.html
     documentation: http://www.htslib.org/doc/bcftools.html
     doi: 10.1093/bioinformatics/btp352
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map

@@ -12,27 +12,6 @@ tools:
     homepage: http://samtools.github.io/bcftools/bcftools.html
     documentation: http://www.htslib.org/doc/bcftools.html
     doi: 10.1093/bioinformatics/btp352
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map
@@ -8,27 +8,6 @@ tools:
     description: |
       A set of tools for genomic analysis tasks, specifically enabling genome arithmetic (merge, count, complement) on various file types.
     documentation: https://bedtools.readthedocs.io/en/latest/content/tools/complement.html
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map

@@ -9,27 +9,6 @@ tools:
     description: |
       A set of tools for genomic analysis tasks, specifically enabling genome arithmetic (merge, count, complement) on various file types.
     documentation: https://bedtools.readthedocs.io/en/latest/content/tools/genomecov.html
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map

@@ -9,27 +9,6 @@ tools:
     description: |
       A set of tools for genomic analysis tasks, specifically enabling genome arithmetic (merge, count, complement) on various file types.
     documentation: https://bedtools.readthedocs.io/en/latest/content/tools/intersect.html
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - bed:
       type: file

@@ -8,27 +8,6 @@ tools:
     description: |
       A set of tools for genomic analysis tasks, specifically enabling genome arithmetic (merge, count, complement) on various file types.
     documentation: https://bedtools.readthedocs.io/en/latest/content/tools/intersect.html
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map

@@ -9,27 +9,6 @@ tools:
     description: |
       A set of tools for genomic analysis tasks, specifically enabling genome arithmetic (merge, count, complement) on various file types.
     documentation: https://bedtools.readthedocs.io/en/latest/content/tools/intersect.html
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map

@@ -8,27 +8,6 @@ tools:
     description: |
       A set of tools for genomic analysis tasks, specifically enabling genome arithmetic (merge, count, complement) on various file types.
     documentation: https://bedtools.readthedocs.io/en/latest/content/tools/merge.html
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map

@@ -8,27 +8,6 @@ tools:
     description: |
       A set of tools for genomic analysis tasks, specifically enabling genome arithmetic (merge, count, complement) on various file types.
     documentation: https://bedtools.readthedocs.io/en/latest/content/tools/slop.html
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map

@@ -8,27 +8,6 @@ tools:
     description: |
       A set of tools for genomic analysis tasks, specifically enabling genome arithmetic (merge, count, complement) on various file types.
     documentation: https://bedtools.readthedocs.io/en/latest/content/tools/sort.html
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map
44 software/bismark/align/main.nf Normal file

@@ -0,0 +1,44 @@
// Import generic module functions
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
options        = initOptions(params.options)

process BISMARK_ALIGN {
    tag "$meta.id"
    label 'process_high'
    publishDir "${params.outdir}",
        mode: params.publish_dir_mode,
        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

    conda (params.enable_conda ? "bioconda::bismark=0.23.0" : null)
    if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
        container "https://depot.galaxyproject.org/singularity/bismark:0.23.0--0"
    } else {
        container "quay.io/biocontainers/bismark:0.23.0--0"
    }

    input:
    tuple val(meta), path(reads)
    path index

    output:
    tuple val(meta), path("*bam")       , emit: bam
    tuple val(meta), path("*report.txt"), emit: report
    tuple val(meta), path("*fq.gz")     , optional:true, emit: unmapped
    path "*.version.txt"                , emit: version

    script:
    def software = getSoftwareName(task.process)
    def prefix   = options.suffix ? "${meta.id}${options.suffix}" : "${meta.id}"
    def fastq    = meta.single_end ? reads : "-1 ${reads[0]} -2 ${reads[1]}"
    """
    bismark \\
        $fastq \\
        $options.args \\
        --genome $index \\
        --bam

    echo \$(bismark -v 2>&1) | sed 's/^.*Bismark Version: v//; s/Copyright.*\$//' > ${software}.version.txt
    """
}
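As a usage note (assumed wiring, not prescribed by this commit): extra `bismark` flags travel through `options.args` at include time, and the index input is the directory produced by the genome preparation module. A minimal sketch; the flag and file names are illustrative assumptions.

// Hypothetical include; the extra flag is an example of what options.args can carry
include { BISMARK_ALIGN } from './software/bismark/align/main' addParams( options: [ args: '--non_directional' ] )

workflow {
    reads = Channel.of( [ [ id:'test', single_end:false ],
                          [ file('test_R1.fastq.gz'), file('test_R2.fastq.gz') ] ] )
    BISMARK_ALIGN ( reads, file('BismarkIndex') )
}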
58 software/bismark/align/meta.yml Normal file

@@ -0,0 +1,58 @@
name: bismark_align
description: Performs alignment of BS-Seq reads using bismark
keywords:
  - bismark
  - 3-letter genome
  - map
  - methylation
  - 5mC
  - methylseq
  - bisulphite
  - bam
tools:
  - bismark:
      description: |
        Bismark is a tool to map bisulfite treated sequencing reads
        and perform methylation calling in a quick and easy-to-use fashion.
      homepage: https://github.com/FelixKrueger/Bismark
      documentation: https://github.com/FelixKrueger/Bismark/tree/master/Docs
      doi: 10.1093/bioinformatics/btr167
input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - reads:
      type: file
      description: |
        List of input FastQ files of size 1 and 2 for single-end and paired-end data,
        respectively.
  - index:
      type: dir
      description: Bismark genome index directory
      pattern: "BismarkIndex"
output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
      description: Output BAM file containing read alignments
      pattern: "*.{bam}"
  - unmapped:
      type: file
      description: Output FastQ file(s) containing unmapped reads
      pattern: "*.{fq.gz}"
  - report:
      type: file
      description: Bismark alignment reports
      pattern: "*{report.txt}"
  - version:
      type: file
      description: File containing software version
      pattern: "*.{version.txt}"
authors:
  - "@phue"
@@ -11,7 +11,7 @@ process BISMARK_DEDUPLICATE {
         mode: params.publish_dir_mode,
         saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

-    conda (params.enable_conda ? "bioconda::bismark==0.23.0" : null)
+    conda (params.enable_conda ? "bioconda::bismark=0.23.0" : null)
     if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
         container "https://depot.galaxyproject.org/singularity/bismark:0.23.0--0"
     } else {
@@ -19,27 +19,6 @@ tools:
     homepage: https://github.com/FelixKrueger/Bismark
     documentation: https://github.com/FelixKrueger/Bismark/tree/master/Docs
     doi: 10.1093/bioinformatics/btr167
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map
59 software/bismark/genomepreparation/functions.nf Normal file

@@ -0,0 +1,59 @@
/*
 * -----------------------------------------------------
 *  Utility functions used in nf-core DSL2 module files
 * -----------------------------------------------------
 */

/*
 * Extract name of software tool from process name using $task.process
 */
def getSoftwareName(task_process) {
    return task_process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
}

/*
 * Function to initialise default values and to generate a Groovy Map of available options for nf-core modules
 */
def initOptions(Map args) {
    def Map options = [:]
    options.args          = args.args ?: ''
    options.args2         = args.args2 ?: ''
    options.publish_by_id = args.publish_by_id ?: false
    options.publish_dir   = args.publish_dir ?: ''
    options.publish_files = args.publish_files
    options.suffix        = args.suffix ?: ''
    return options
}

/*
 * Tidy up and join elements of a list to return a path string
 */
def getPathFromList(path_list) {
    def paths = path_list.findAll { item -> !item?.trim().isEmpty() }  // Remove empty entries
    paths = paths.collect { it.trim().replaceAll("^[/]+|[/]+\$", "") } // Trim whitespace and trailing slashes
    return paths.join('/')
}

/*
 * Function to save/publish module results
 */
def saveFiles(Map args) {
    if (!args.filename.endsWith('.version.txt')) {
        def ioptions  = initOptions(args.options)
        def path_list = [ ioptions.publish_dir ?: args.publish_dir ]
        if (ioptions.publish_by_id) {
            path_list.add(args.publish_id)
        }
        if (ioptions.publish_files instanceof Map) {
            for (ext in ioptions.publish_files) {
                if (args.filename.endsWith(ext.key)) {
                    def ext_list = path_list.collect()
                    ext_list.add(ext.value)
                    return "${getPathFromList(ext_list)}/$args.filename"
                }
            }
        } else if (ioptions.publish_files == null) {
            return "${getPathFromList(path_list)}/$args.filename"
        }
    }
}
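To make the `saveFiles` contract above concrete, an illustrative trace with assumed inputs (not part of the diff): with a `publish_files` map, only files whose names end in a listed key are published, into the mapped subdirectory; `*.version.txt` files always fall through and return null, so they are never published.

// Illustrative only: how saveFiles resolves publish paths for the options shown
def opts = [ publish_files: [ 'bam':'alignments' ], publish_by_id: false ]
assert saveFiles( filename:'test.bam', options:opts, publish_dir:'bismark', publish_id:'' ) == 'bismark/alignments/test.bam'
assert saveFiles( filename:'bismark.version.txt', options:opts, publish_dir:'bismark', publish_id:'' ) == null  // version files are not published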
@@ -4,14 +4,14 @@ include { initOptions; saveFiles; getSoftwareName } from './functions'
 params.options = [:]
 options        = initOptions(params.options)

-process BISMARK_GENOME_PREPARATION {
+process BISMARK_GENOMEPREPARATION {
     tag "$fasta"
     label 'process_high'
     publishDir "${params.outdir}",
         mode: params.publish_dir_mode,
         saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:'') }

-    conda (params.enable_conda ? "bioconda::bismark==0.23.0" : null)
+    conda (params.enable_conda ? "bioconda::bismark=0.23.0" : null)
     if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
         container "https://depot.galaxyproject.org/singularity/bismark:0.23.0--0"
     } else {
@@ -1,4 +1,4 @@
-name: bismark_genome_preparation
+name: bismark_genomepreparation
 description: |
   Converts a specified reference genome into two different bisulfite
   converted versions and indexes them for alignments.
@@ -19,27 +19,6 @@ tools:
     homepage: https://github.com/FelixKrueger/Bismark
     documentation: https://github.com/FelixKrueger/Bismark/tree/master/Docs
     doi: 10.1093/bioinformatics/btr167
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - fasta:
       type: file
59 software/bismark/methylationextractor/functions.nf Normal file

@@ -0,0 +1,59 @@
/*
 * -----------------------------------------------------
 *  Utility functions used in nf-core DSL2 module files
 * -----------------------------------------------------
 */

/*
 * Extract name of software tool from process name using $task.process
 */
def getSoftwareName(task_process) {
    return task_process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
}

/*
 * Function to initialise default values and to generate a Groovy Map of available options for nf-core modules
 */
def initOptions(Map args) {
    def Map options = [:]
    options.args          = args.args ?: ''
    options.args2         = args.args2 ?: ''
    options.publish_by_id = args.publish_by_id ?: false
    options.publish_dir   = args.publish_dir ?: ''
    options.publish_files = args.publish_files
    options.suffix        = args.suffix ?: ''
    return options
}

/*
 * Tidy up and join elements of a list to return a path string
 */
def getPathFromList(path_list) {
    def paths = path_list.findAll { item -> !item?.trim().isEmpty() }  // Remove empty entries
    paths = paths.collect { it.trim().replaceAll("^[/]+|[/]+\$", "") } // Trim whitespace and trailing slashes
    return paths.join('/')
}

/*
 * Function to save/publish module results
 */
def saveFiles(Map args) {
    if (!args.filename.endsWith('.version.txt')) {
        def ioptions  = initOptions(args.options)
        def path_list = [ ioptions.publish_dir ?: args.publish_dir ]
        if (ioptions.publish_by_id) {
            path_list.add(args.publish_id)
        }
        if (ioptions.publish_files instanceof Map) {
            for (ext in ioptions.publish_files) {
                if (args.filename.endsWith(ext.key)) {
                    def ext_list = path_list.collect()
                    ext_list.add(ext.value)
                    return "${getPathFromList(ext_list)}/$args.filename"
                }
            }
        } else if (ioptions.publish_files == null) {
            return "${getPathFromList(path_list)}/$args.filename"
        }
    }
}
48 software/bismark/methylationextractor/main.nf Normal file

@@ -0,0 +1,48 @@
// Import generic module functions
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
options        = initOptions(params.options)

process BISMARK_METHYLATIONEXTRACTOR {
    tag "$meta.id"
    label 'process_high'
    publishDir "${params.outdir}",
        mode: params.publish_dir_mode,
        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

    conda (params.enable_conda ? "bioconda::bismark=0.23.0" : null)
    if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
        container "https://depot.galaxyproject.org/singularity/bismark:0.23.0--0"
    } else {
        container "quay.io/biocontainers/bismark:0.23.0--0"
    }

    input:
    tuple val(meta), path(bam)
    path index

    output:
    tuple val(meta), path("*.bedGraph.gz")         , emit: bedgraph
    tuple val(meta), path("*.txt.gz")              , emit: methylation_calls
    tuple val(meta), path("*.cov.gz")              , emit: coverage
    tuple val(meta), path("*_splitting_report.txt"), emit: report
    tuple val(meta), path("*.M-bias.txt")          , emit: mbias
    path "*.version.txt"                           , emit: version

    script:
    def seqtype  = meta.single_end ? '-s' : '-p'
    def software = getSoftwareName(task.process)
    """
    bismark_methylation_extractor \\
        --bedGraph \\
        --counts \\
        --gzip \\
        --report \\
        $seqtype \\
        $options.args \\
        $bam

    echo \$(bismark -v 2>&1) | sed 's/^.*Bismark Version: v//; s/Copyright.*\$//' > ${software}.version.txt
    """
}
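The hard-coded `--bedGraph --counts --gzip --report` flags above are always applied; anything beyond that is expected to arrive via `options.args`. A hypothetical include (the chosen flag is an assumption, not mandated by this commit):

// Hypothetical: pass an extra extractor flag through options.args at include time
include { BISMARK_METHYLATIONEXTRACTOR } from './software/bismark/methylationextractor/main' addParams( options: [ args: '--no_overlap' ] )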
66 software/bismark/methylationextractor/meta.yml Normal file

@@ -0,0 +1,66 @@
name: bismark_methylationextractor
description: Extracts methylation information for individual cytosines from alignments.
keywords:
  - bismark
  - consensus
  - map
  - methylation
  - 5mC
  - methylseq
  - bisulphite
  - bam
  - bedGraph
tools:
  - bismark:
      description: |
        Bismark is a tool to map bisulfite treated sequencing reads
        and perform methylation calling in a quick and easy-to-use fashion.
      homepage: https://github.com/FelixKrueger/Bismark
      documentation: https://github.com/FelixKrueger/Bismark/tree/master/Docs
      doi: 10.1093/bioinformatics/btr167
input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
      description: BAM file containing read alignments
      pattern: "*.{bam}"
  - index:
      type: dir
      description: Bismark genome index directory
      pattern: "BismarkIndex"
output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bedgraph:
      type: file
      description: Bismark output file containing coverage and methylation metrics
      pattern: "*.{bedGraph.gz}"
  - methylation_calls:
      type: file
      description: Bismark output file containing strand-specific methylation calls
      pattern: "*.{txt.gz}"
  - coverage:
      type: file
      description: Bismark output file containing coverage metrics
      pattern: "*.{cov.gz}"
  - report:
      type: file
      description: Bismark splitting reports
      pattern: "*_{splitting_report.txt}"
  - mbias:
      type: file
      description: Text file containing methylation bias information
      pattern: "*.{M-bias.txt}"
  - version:
      type: file
      description: File containing software version
      pattern: "*.{version.txt}"
authors:
  - "@phue"
59 software/bismark/report/functions.nf Normal file

@@ -0,0 +1,59 @@
/*
 * -----------------------------------------------------
 *  Utility functions used in nf-core DSL2 module files
 * -----------------------------------------------------
 */

/*
 * Extract name of software tool from process name using $task.process
 */
def getSoftwareName(task_process) {
    return task_process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
}

/*
 * Function to initialise default values and to generate a Groovy Map of available options for nf-core modules
 */
def initOptions(Map args) {
    def Map options = [:]
    options.args          = args.args ?: ''
    options.args2         = args.args2 ?: ''
    options.publish_by_id = args.publish_by_id ?: false
    options.publish_dir   = args.publish_dir ?: ''
    options.publish_files = args.publish_files
    options.suffix        = args.suffix ?: ''
    return options
}

/*
 * Tidy up and join elements of a list to return a path string
 */
def getPathFromList(path_list) {
    def paths = path_list.findAll { item -> !item?.trim().isEmpty() }  // Remove empty entries
    paths = paths.collect { it.trim().replaceAll("^[/]+|[/]+\$", "") } // Trim whitespace and trailing slashes
    return paths.join('/')
}

/*
 * Function to save/publish module results
 */
def saveFiles(Map args) {
    if (!args.filename.endsWith('.version.txt')) {
        def ioptions  = initOptions(args.options)
        def path_list = [ ioptions.publish_dir ?: args.publish_dir ]
        if (ioptions.publish_by_id) {
            path_list.add(args.publish_id)
        }
        if (ioptions.publish_files instanceof Map) {
            for (ext in ioptions.publish_files) {
                if (args.filename.endsWith(ext.key)) {
                    def ext_list = path_list.collect()
                    ext_list.add(ext.value)
                    return "${getPathFromList(ext_list)}/$args.filename"
                }
            }
        } else if (ioptions.publish_files == null) {
            return "${getPathFromList(path_list)}/$args.filename"
        }
    }
}
39 software/bismark/report/main.nf Normal file

@@ -0,0 +1,39 @@
// Import generic module functions
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
options        = initOptions(params.options)

process BISMARK_REPORT {
    tag "$meta.id"
    label 'process_low'
    publishDir "${params.outdir}",
        mode: params.publish_dir_mode,
        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

    conda (params.enable_conda ? "bioconda::bismark=0.23.0" : null)
    if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
        container "https://depot.galaxyproject.org/singularity/bismark:0.23.0--0"
    } else {
        container "quay.io/biocontainers/bismark:0.23.0--0"
    }

    input:
    tuple val(meta), path(align_report), path(dedup_report), path(splitting_report), path(mbias)

    output:
    tuple val(meta), path("*{html,txt}"), emit: report
    path "*.version.txt"                , emit: version

    script:
    def software = getSoftwareName(task.process)
    """
    bismark2report \\
        --alignment_report $align_report \\
        --dedup_report $dedup_report \\
        --splitting_report $splitting_report \\
        --mbias_report $mbias

    echo \$(bismark -v 2>&1) | sed 's/^.*Bismark Version: v//; s/Copyright.*\$//' > ${software}.version.txt
    """
}
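Since `BISMARK_REPORT` expects one five-element tuple per sample, upstream per-sample channels have to be merged on the meta map first. A hypothetical wiring sketch (the upstream emission names are assumptions based on the modules in this commit):

// Hypothetical: join per-sample [ meta, file ] channels on the shared meta key
ch_reports = BISMARK_ALIGN.out.report
    .join( BISMARK_DEDUPLICATE.out.report )            // assumed emission of the deduplicate module
    .join( BISMARK_METHYLATIONEXTRACTOR.out.report )
    .join( BISMARK_METHYLATIONEXTRACTOR.out.mbias )    // yields [ meta, align, dedup, splitting, mbias ]
BISMARK_REPORT ( ch_reports )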
59 software/bismark/report/meta.yml Normal file

@@ -0,0 +1,59 @@
name: bismark_report
description: Collects bismark alignment reports
keywords:
  - bismark
  - qc
  - methylation
  - 5mC
  - methylseq
  - bisulphite
  - report
tools:
  - bismark:
      description: |
        Bismark is a tool to map bisulfite treated sequencing reads
        and perform methylation calling in a quick and easy-to-use fashion.
      homepage: https://github.com/FelixKrueger/Bismark
      documentation: https://github.com/FelixKrueger/Bismark/tree/master/Docs
      doi: 10.1093/bioinformatics/btr167
input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - align_report:
      type: file
      description: Bismark alignment reports
      pattern: "*{report.txt}"
  - splitting_report:
      type: file
      description: Bismark splitting reports
      pattern: "*{splitting_report.txt}"
  - dedup_report:
      type: file
      description: Bismark deduplication reports
      pattern: "*.{deduplication_report.txt}"
  - mbias:
      type: file
      description: Text file containing methylation bias information
      pattern: "*.{txt}"
  - fasta:
      type: file
      description: Input genome fasta file
output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - report:
      type: file
      description: Bismark reports
      pattern: "*.{html,txt}"
  - version:
      type: file
      description: File containing software version
      pattern: "*.{version.txt}"
authors:
  - "@phue"
59 software/bismark/summary/functions.nf Normal file

@@ -0,0 +1,59 @@
/*
 * -----------------------------------------------------
 *  Utility functions used in nf-core DSL2 module files
 * -----------------------------------------------------
 */

/*
 * Extract name of software tool from process name using $task.process
 */
def getSoftwareName(task_process) {
    return task_process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
}

/*
 * Function to initialise default values and to generate a Groovy Map of available options for nf-core modules
 */
def initOptions(Map args) {
    def Map options = [:]
    options.args          = args.args ?: ''
    options.args2         = args.args2 ?: ''
    options.publish_by_id = args.publish_by_id ?: false
    options.publish_dir   = args.publish_dir ?: ''
    options.publish_files = args.publish_files
    options.suffix        = args.suffix ?: ''
    return options
}

/*
 * Tidy up and join elements of a list to return a path string
 */
def getPathFromList(path_list) {
    def paths = path_list.findAll { item -> !item?.trim().isEmpty() }  // Remove empty entries
    paths = paths.collect { it.trim().replaceAll("^[/]+|[/]+\$", "") } // Trim whitespace and trailing slashes
    return paths.join('/')
}

/*
 * Function to save/publish module results
 */
def saveFiles(Map args) {
    if (!args.filename.endsWith('.version.txt')) {
        def ioptions  = initOptions(args.options)
        def path_list = [ ioptions.publish_dir ?: args.publish_dir ]
        if (ioptions.publish_by_id) {
            path_list.add(args.publish_id)
        }
        if (ioptions.publish_files instanceof Map) {
            for (ext in ioptions.publish_files) {
                if (args.filename.endsWith(ext.key)) {
                    def ext_list = path_list.collect()
                    ext_list.add(ext.value)
                    return "${getPathFromList(ext_list)}/$args.filename"
                }
            }
        } else if (ioptions.publish_files == null) {
            return "${getPathFromList(path_list)}/$args.filename"
        }
    }
}
38 software/bismark/summary/main.nf Normal file

@@ -0,0 +1,38 @@
// Import generic module functions
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
options        = initOptions(params.options)

process BISMARK_SUMMARY {
    label 'process_low'
    publishDir "${params.outdir}",
        mode: params.publish_dir_mode,
        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:'') }

    conda (params.enable_conda ? "bioconda::bismark=0.23.0" : null)
    if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
        container "https://depot.galaxyproject.org/singularity/bismark:0.23.0--0"
    } else {
        container "quay.io/biocontainers/bismark:0.23.0--0"
    }

    input:
    path(bam)
    path(align_report)
    path(dedup_report)
    path(splitting_report)
    path(mbias)

    output:
    path("*{html,txt}") , emit: summary
    path "*.version.txt", emit: version

    script:
    def software = getSoftwareName(task.process)
    """
    bismark2summary

    echo \$(bismark -v 2>&1) | sed 's/^.*Bismark Version: v//; s/Copyright.*\$//' > ${software}.version.txt
    """
}
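`bismark2summary` scans the files staged into its work directory, so each input above is the full set across samples rather than one tuple per sample. A hypothetical wiring sketch (upstream emission names are assumptions):

// Hypothetical: strip the meta map and gather every sample's file into one list per input
BISMARK_SUMMARY (
    BISMARK_ALIGN.out.bam.map { meta, bam -> bam }.collect(),
    BISMARK_ALIGN.out.report.map { meta, report -> report }.collect(),
    BISMARK_DEDUPLICATE.out.report.map { meta, report -> report }.collect(),   // assumed emission
    BISMARK_METHYLATIONEXTRACTOR.out.report.map { meta, report -> report }.collect(),
    BISMARK_METHYLATIONEXTRACTOR.out.mbias.map { meta, mbias -> mbias }.collect()
)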
53 software/bismark/summary/meta.yml Normal file

@@ -0,0 +1,53 @@
name: bismark_summary
description: |
  Uses Bismark report files of several samples in a run folder
  to generate a graphical summary HTML report.
keywords:
  - bismark
  - qc
  - methylation
  - 5mC
  - methylseq
  - bisulphite
  - report
  - summary
tools:
  - bismark:
      description: |
        Bismark is a tool to map bisulfite treated sequencing reads
        and perform methylation calling in a quick and easy-to-use fashion.
      homepage: https://github.com/FelixKrueger/Bismark
      documentation: https://github.com/FelixKrueger/Bismark/tree/master/Docs
      doi: 10.1093/bioinformatics/btr167
input:
  - bam:
      type: file
      description: Bismark alignment
      pattern: "*.{bam}"
  - align_report:
      type: file
      description: Bismark alignment reports
      pattern: "*{report.txt}"
  - dedup_report:
      type: file
      description: Bismark deduplication reports
      pattern: "*.{deduplication_report.txt}"
  - splitting_report:
      type: file
      description: Bismark splitting reports
      pattern: "*{splitting_report.txt}"
  - mbias:
      type: file
      description: Text file containing methylation bias information
      pattern: "*.{txt}"
output:
  - summary:
      type: file
      description: Bismark summary
      pattern: "*.{html,txt}"
  - version:
      type: file
      description: File containing software version
      pattern: "*.{version.txt}"
authors:
  - "@phue"
@@ -12,27 +12,6 @@ tools:
     homepage: https://blast.ncbi.nlm.nih.gov/Blast.cgi
     documentation: https://blast.ncbi.nlm.nih.gov/Blast.cgi?CMD=Web&PAGE_TYPE=Blastdocs
     doi: 10.1016/S0022-2836(05)80360-2
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map

@@ -11,27 +11,6 @@ tools:
     homepage: https://blast.ncbi.nlm.nih.gov/Blast.cgi
     documentation: https://blast.ncbi.nlm.nih.gov/Blast.cgi?CMD=Web&PAGE_TYPE=Blastdocs
     doi: 10.1016/S0022-2836(05)80360-2
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - fasta:
       type: file
@@ -13,30 +13,6 @@ tools:
     homepage: http://bowtie-bio.sourceforge.net/index.shtml
     documentation: http://bowtie-bio.sourceforge.net/manual.shtml
     arxiv: arXiv:1303.3997
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
-  - save_unaligned:
-      type: boolean
-      description: Save unaligned reads
 input:
   - meta:
       type: map
@@ -9,7 +9,7 @@ process BOWTIE_BUILD {
     label 'process_high'
     publishDir "${params.outdir}",
         mode: params.publish_dir_mode,
-        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:'') }
+        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:'index', publish_id:'') }

     conda (params.enable_conda ? 'bioconda::bowtie=1.3.0' : null)
     if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
@@ -13,27 +13,6 @@ tools:
     homepage: http://bowtie-bio.sourceforge.net/index.shtml
     documentation: http://bowtie-bio.sourceforge.net/manual.shtml
     arxiv: arXiv:1303.3997
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - fasta:
       type: file
@@ -13,30 +13,6 @@ tools:
     homepage: http://bowtie-bio.sourceforge.net/bowtie2/index.shtml
     documentation: http://bowtie-bio.sourceforge.net/bowtie2/manual.shtml
     doi: 10.1038/nmeth.1923
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
-  - save_unaligned:
-      type: boolean
-      description: Save unaligned reads
 input:
   - meta:
       type: map
@@ -9,7 +9,7 @@ process BOWTIE2_BUILD {
     label 'process_high'
     publishDir "${params.outdir}",
         mode: params.publish_dir_mode,
-        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:'') }
+        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:'index', publish_id:'') }

     conda (params.enable_conda ? 'bioconda::bowtie2=2.4.2' : null)
     if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
@@ -14,27 +14,6 @@ tools:
     homepage: http://bowtie-bio.sourceforge.net/bowtie2/index.shtml
     documentation: http://bowtie-bio.sourceforge.net/bowtie2/manual.shtml
     doi: 10.1038/nmeth.1923
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - fasta:
       type: file
@@ -9,7 +9,7 @@ process BWA_INDEX {
     label 'process_high'
     publishDir "${params.outdir}",
         mode: params.publish_dir_mode,
-        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:'') }
+        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:'index', publish_id:'') }

     conda (params.enable_conda ? "bioconda::bwa=0.7.17" : null)
     if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
@@ -13,27 +13,6 @@ tools:
     homepage: http://bio-bwa.sourceforge.net/
     documentation: http://www.htslib.org/doc/samtools.html
     arxiv: arXiv:1303.3997
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - fasta:
       type: file
@@ -11,11 +11,11 @@ process BWA_MEM {
         mode: params.publish_dir_mode,
         saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

-    conda (params.enable_conda ? "bioconda::bwa=0.7.17 bioconda::samtools=1.10" : null)
+    conda (params.enable_conda ? "bioconda::bwa=0.7.17 bioconda::samtools=1.12" : null)
     if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
-        container "https://depot.galaxyproject.org/singularity/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:eabfac3657eda5818bae4090db989e3d41b01542-0"
+        container "https://depot.galaxyproject.org/singularity/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:66ed1b38d280722529bb8a0167b0cf02f8a0b488-0"
     } else {
-        container "quay.io/biocontainers/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:eabfac3657eda5818bae4090db989e3d41b01542-0"
+        container "quay.io/biocontainers/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:66ed1b38d280722529bb8a0167b0cf02f8a0b488-0"
     }

     input:
@@ -16,27 +16,6 @@ tools:
     homepage: http://bio-bwa.sourceforge.net/
     documentation: http://www.htslib.org/doc/samtools.html
     arxiv: arXiv:1303.3997
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map
@@ -9,13 +9,13 @@ process BWAMEM2_INDEX {
     label 'process_high'
     publishDir "${params.outdir}",
         mode: params.publish_dir_mode,
-        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:'') }
+        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:'index', publish_id:'') }

-    conda (params.enable_conda ? "bioconda::bwa-mem2=2.1" : null)
+    conda (params.enable_conda ? "bioconda::bwa-mem2=2.2.1" : null)
     if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
-        container "https://depot.galaxyproject.org/singularity/bwa-mem2:2.1--he513fc3_0"
+        container "https://depot.galaxyproject.org/singularity/bwa-mem2:2.2.1--he513fc3_0"
     } else {
-        container "quay.io/biocontainers/bwa-mem2:2.1--he513fc3_0"
+        container "quay.io/biocontainers/bwa-mem2:2.2.1--he513fc3_0"
     }

     input:
@@ -12,27 +12,6 @@ tools:
       a large reference genome, such as the human genome.
     homepage: https://github.com/bwa-mem2/bwa-mem2
     documentation: https://github.com/bwa-mem2/bwa-mem2#usage
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - fasta:
       type: file
@@ -11,11 +11,11 @@ process BWAMEM2_MEM {
         mode: params.publish_dir_mode,
         saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

-    conda (params.enable_conda ? "bioconda::bwa-mem2=2.1 bioconda::samtools=1.11" : null)
+    conda (params.enable_conda ? "bioconda::bwa-mem2=2.2.1 bioconda::samtools=1.12" : null)
     if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
-        container "https://depot.galaxyproject.org/singularity/mulled-v2-e5d375990341c5aef3c9aff74f96f66f65375ef6:e6f0d20c9d78572ddbbf00d8767ee6ff865edd4e-0"
+        container "https://depot.galaxyproject.org/singularity/mulled-v2-e5d375990341c5aef3c9aff74f96f66f65375ef6:cf603b12db30ec91daa04ba45a8ee0f35bbcd1e2-0"
     } else {
-        container "quay.io/biocontainers/mulled-v2-e5d375990341c5aef3c9aff74f96f66f65375ef6:e6f0d20c9d78572ddbbf00d8767ee6ff865edd4e-0"
+        container "quay.io/biocontainers/mulled-v2-e5d375990341c5aef3c9aff74f96f66f65375ef6:cf603b12db30ec91daa04ba45a8ee0f35bbcd1e2-0"
     }

     input:
@@ -16,27 +16,6 @@ tools:
       homepage: http://bio-bwa.sourceforge.net/
       documentation: http://www.htslib.org/doc/samtools.html
       arxiv: arXiv:1303.3997
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map
@@ -19,27 +19,6 @@ tools:
       homepage: https://github.com/brentp/bwa-meth
       documentation: https://github.com/brentp/bwa-meth
       arxiv: arXiv:1401.1129
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map
@@ -9,7 +9,7 @@ process BWAMETH_INDEX {
     label 'process_high'
     publishDir "${params.outdir}",
         mode: params.publish_dir_mode,
-        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:'') }
+        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:'index', publish_id:'') }

     conda (params.enable_conda ? "bioconda::bwameth=0.2.2" : null)
     if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
@@ -15,27 +15,6 @@ tools:
       homepage: https://github.com/brentp/bwa-meth
       documentation: https://github.com/brentp/bwa-meth
       arxiv: arXiv:1401.1129
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - fasta:
       type: file
@@ -6,6 +6,7 @@ options = initOptions(params.options)

 process CAT_FASTQ {
     tag "$meta.id"
+    label 'process_low'
     publishDir "${params.outdir}",
         mode: params.publish_dir_mode,
         saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:'merged_fastq', publish_id:meta.id) }
@@ -8,27 +8,6 @@ tools:
       description: |
         The cat utility reads files sequentially, writing them to the standard output.
       documentation: https://www.gnu.org/software/coreutils/manual/html_node/cat-invocation.html
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map
59  software/cnvkit/functions.nf  Executable file
@@ -0,0 +1,59 @@
/*
 * -----------------------------------------------------
 * Utility functions used in nf-core DSL2 module files
 * -----------------------------------------------------
 */

/*
 * Extract name of software tool from process name using $task.process
 */
def getSoftwareName(task_process) {
    return task_process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
}

/*
 * Function to initialise default values and to generate a Groovy Map of available options for nf-core modules
 */
def initOptions(Map args) {
    def Map options = [:]
    options.args          = args.args ?: ''
    options.args2         = args.args2 ?: ''
    options.publish_by_id = args.publish_by_id ?: false
    options.publish_dir   = args.publish_dir ?: ''
    options.publish_files = args.publish_files
    options.suffix        = args.suffix ?: ''
    return options
}

/*
 * Tidy up and join elements of a list to return a path string
 */
def getPathFromList(path_list) {
    def paths = path_list.findAll { item -> !item?.trim().isEmpty() }  // Remove empty entries
    paths = paths.collect { it.trim().replaceAll("^[/]+|[/]+\$", "") } // Trim whitespace and trailing slashes
    return paths.join('/')
}

/*
 * Function to save/publish module results
 */
def saveFiles(Map args) {
    if (!args.filename.endsWith('.version.txt')) {
        def ioptions  = initOptions(args.options)
        def path_list = [ ioptions.publish_dir ?: args.publish_dir ]
        if (ioptions.publish_by_id) {
            path_list.add(args.publish_id)
        }
        if (ioptions.publish_files instanceof Map) {
            for (ext in ioptions.publish_files) {
                if (args.filename.endsWith(ext.key)) {
                    def ext_list = path_list.collect()
                    ext_list.add(ext.value)
                    return "${getPathFromList(ext_list)}/$args.filename"
                }
            }
        } else if (ioptions.publish_files == null) {
            return "${getPathFromList(path_list)}/$args.filename"
        }
    }
}
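[Editor's note: the three helpers above drive every module's publishDir logic. A minimal illustrative sketch of their behaviour — the option values below are hypothetical, not part of the commit:]

    // getSoftwareName strips the workflow prefix and any _SUBCOMMAND suffix:
    assert getSoftwareName('NFCORE_PIPELINE:CNVKIT') == 'cnvkit'

    // saveFiles routes a file under publish_dir; a publish_files map appends a
    // per-extension subfolder ('alignments' here) for matching filenames:
    assert saveFiles(filename: 'test.bam', options: [publish_files: ['bam':'alignments']],
                     publish_dir: 'cnvkit', publish_id: '') == 'cnvkit/alignments/test.bam'

    // *.version.txt files fall through and return null, so they are never published:
    assert saveFiles(filename: 'cnvkit.version.txt', options: [:],
                     publish_dir: 'cnvkit', publish_id: '') == null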
46  software/cnvkit/main.nf  Executable file
@@ -0,0 +1,46 @@
// Import generic module functions
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
def options    = initOptions(params.options)

process CNVKIT {
    tag "$meta.id"
    label 'process_low'
    publishDir "${params.outdir}",
        mode: params.publish_dir_mode,
        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

    conda (params.enable_conda ? "bioconda::cnvkit=0.9.8" : null)
    if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
        container "https://depot.galaxyproject.org/singularity/cnvkit:0.9.8--py_0"
    } else {
        container "quay.io/biocontainers/cnvkit:0.9.8--py_0"
    }

    input:
    tuple val(meta), path(tumourbam), path(normalbam)
    path fasta
    path targetfile

    output:
    tuple val(meta), path("*.bed"), emit: bed
    tuple val(meta), path("*.cnn"), emit: cnn
    tuple val(meta), path("*.cnr"), emit: cnr
    tuple val(meta), path("*.cns"), emit: cns
    path "*.version.txt"          , emit: version

    script:
    def software = getSoftwareName(task.process)
    def prefix   = options.suffix ? "${meta.id}.${options.suffix}" : "${meta.id}"
    """
    cnvkit.py batch \\
        $tumourbam \\
        --normal $normalbam \\
        --fasta $fasta \\
        --targets $targetfile \\
        $options.args

    cnvkit.py version | sed -e "s/cnvkit v//g" > ${software}.version.txt
    """
}
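[Editor's note: like every module in this repository, the new process is pulled into a pipeline with an include plus per-module options. A minimal sketch — file names and args are hypothetical:]

    // Illustrative DSL2 usage of the new module
    include { CNVKIT } from './software/cnvkit/main' addParams( options: [args: '--method hybrid'] )

    workflow {
        input = Channel.of( [ [ id:'test' ], file('tumour.bam'), file('normal.bam') ] )
        CNVKIT ( input, file('genome.fasta'), file('targets.bed') )
        CNVKIT.out.cns.view()   // segmented copy-number calls
    }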
87  software/cnvkit/meta.yml  Executable file
@@ -0,0 +1,87 @@
name: cnvkit
description: Copy number variant detection from high-throughput sequencing data
keywords:
  - bam
  - fasta
  - copy number
tools:
  - cnvkit:
      description: |
        CNVkit is a Python library and command-line software toolkit to infer and visualize copy number from high-throughput DNA sequencing data. It is designed for use with hybrid capture, including both whole-exome and custom target panels, and short-read sequencing platforms such as Illumina and Ion Torrent.
      homepage: https://cnvkit.readthedocs.io/en/stable/index.html
      documentation: https://cnvkit.readthedocs.io/en/stable/index.html
params:
  - outdir:
      type: string
      description: |
        The pipeline's output directory. By default, the module will
        output files into `$params.outdir/<SOFTWARE>`
  - publish_dir_mode:
      type: string
      description: |
        Value for the Nextflow `publishDir` mode parameter.
        Available: symlink, rellink, link, copy, copyNoFollow, move.
  - enable_conda:
      type: boolean
      description: |
        Run the module with Conda using the software specified
        via the `conda` directive
  - singularity_pull_docker_container:
      type: boolean
      description: |
        Instead of directly downloading Singularity images for use with Singularity,
        force the workflow to pull and convert Docker containers instead.
input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - tumourbam:
      type: file
      description: |
        Input tumour sample bam file
  - normalbam:
      type: file
      description: |
        Input normal sample bam file
  - fasta:
      type: file
      description: |
        Input reference genome fasta file
  - targetfile:
      type: file
      description: |
        Input target bed file
output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bed:
      type: file
      description: File containing genomic regions
      pattern: "*.{bed}"
  - cnn:
      type: file
      description: File containing coverage information
      pattern: "*.{cnn}"
  - cnr:
      type: file
      description: File containing copy number ratio information
      pattern: "*.{cnr}"
  - cns:
      type: file
      description: File containing copy number segment information
      pattern: "*.{cns}"
  - version:
      type: file
      description: File containing software version
      pattern: "*.{version.txt}"
authors:
  - "@kaurravneet4123"
  - "@KevinMenden"
  - "@MaxUlysse"
  - "@drpatelh"
@@ -11,27 +11,6 @@ tools:
         Cutadapt finds and removes adapter sequences, primers, poly-A tails and other types of unwanted sequence from your high-throughput sequencing reads.
       documentation: https://cutadapt.readthedocs.io/en/stable/index.html
       doi: DOI:10.14806/ej.17.1.200
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map
@@ -11,11 +11,11 @@ process DSH_FILTERBED {
         mode: params.publish_dir_mode,
         saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

-    conda (params.enable_conda ? "bioconda::dsh-bio=2.0" : null)
+    conda (params.enable_conda ? "bioconda::dsh-bio=2.0.3" : null)
     if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
-        container "https://depot.galaxyproject.org/singularity/dsh-bio:2.0--0"
+        container "https://depot.galaxyproject.org/singularity/dsh-bio:2.0.3--0"
     } else {
-        container "quay.io/biocontainers/dsh-bio:2.0--0"
+        container "quay.io/biocontainers/dsh-bio:2.0.3--0"
     }

     input:
@@ -10,27 +10,6 @@ tools:
         or later.
       homepage: https://github.com/heuermh/dishevelled-bio
       documentation: https://github.com/heuermh/dishevelled-bio
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map
@@ -11,11 +11,11 @@ process DSH_SPLITBED {
         mode: params.publish_dir_mode,
         saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

-    conda (params.enable_conda ? "bioconda::dsh-bio=2.0" : null)
+    conda (params.enable_conda ? "bioconda::dsh-bio=2.0.3" : null)
     if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
-        container "https://depot.galaxyproject.org/singularity/dsh-bio:2.0--0"
+        container "https://depot.galaxyproject.org/singularity/dsh-bio:2.0.3--0"
     } else {
-        container "quay.io/biocontainers/dsh-bio:2.0--0"
+        container "quay.io/biocontainers/dsh-bio:2.0.3--0"
     }

     input:
@@ -10,27 +10,6 @@ tools:
         or later.
       homepage: https://github.com/heuermh/dishevelled-bio
       documentation: https://github.com/heuermh/dishevelled-bio
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map
@@ -10,28 +10,6 @@ tools:
         A tool designed to provide fast all-in-one preprocessing for FastQ files. This tool is developed in C++ with multithreading supported to afford high performance.
       documentation: https://github.com/OpenGene/fastp
       doi: https://doi.org/10.1093/bioinformatics/bty560
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
-
 input:
   - meta:
       type: map
@@ -15,27 +15,6 @@ tools:
         overrepresented sequences.
       homepage: https://www.bioinformatics.babraham.ac.uk/projects/fastqc/
      documentation: https://www.bioinformatics.babraham.ac.uk/projects/fastqc/Help/
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map
60  software/fgbio/callmolecularconsensusreads/functions.nf  Normal file
@@ -0,0 +1,60 @@

/*
 * -----------------------------------------------------
 * Utility functions used in nf-core DSL2 module files
 * -----------------------------------------------------
 */

/*
 * Extract name of software tool from process name using $task.process
 */
def getSoftwareName(task_process) {
    return task_process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
}

/*
 * Function to initialise default values and to generate a Groovy Map of available options for nf-core modules
 */
def initOptions(Map args) {
    def Map options = [:]
    options.args          = args.args ?: ''
    options.args2         = args.args2 ?: ''
    options.publish_by_id = args.publish_by_id ?: false
    options.publish_dir   = args.publish_dir ?: ''
    options.publish_files = args.publish_files
    options.suffix        = args.suffix ?: ''
    return options
}

/*
 * Tidy up and join elements of a list to return a path string
 */
def getPathFromList(path_list) {
    def paths = path_list.findAll { item -> !item?.trim().isEmpty() }  // Remove empty entries
    paths = paths.collect { it.trim().replaceAll("^[/]+|[/]+\$", '') } // Trim whitespace and trailing slashes
    return paths.join('/')
}

/*
 * Function to save/publish module results
 */
def saveFiles(Map args) {
    if (!args.filename.endsWith('.version.txt')) {
        def ioptions  = initOptions(args.options)
        def path_list = [ ioptions.publish_dir ?: args.publish_dir ]
        if (ioptions.publish_by_id) {
            path_list.add(args.publish_id)
        }
        if (ioptions.publish_files instanceof Map) {
            for (ext in ioptions.publish_files) {
                if (args.filename.endsWith(ext.key)) {
                    def ext_list = path_list.collect()
                    ext_list.add(ext.value)
                    return "${getPathFromList(ext_list)}/$args.filename"
                }
            }
        } else if (ioptions.publish_files == null) {
            return "${getPathFromList(path_list)}/$args.filename"
        }
    }
}
38  software/fgbio/callmolecularconsensusreads/main.nf  Normal file
@@ -0,0 +1,38 @@
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
options        = initOptions(params.options)

process FGBIO_CALLMOLECULARCONSENSUSREADS {
    tag "$meta.id"
    label 'process_medium'
    publishDir "${params.outdir}",
        mode: params.publish_dir_mode,
        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

    conda (params.enable_conda ? "bioconda::fgbio=1.3.0" : null)
    if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
        container "https://depot.galaxyproject.org/singularity/fgbio:1.3.0--0"
    } else {
        container "quay.io/biocontainers/fgbio:1.3.0--0"
    }

    input:
    tuple val(meta), path(bam)

    output:
    tuple val(meta), path("*.bam"), emit: bam
    path "*.version.txt"          , emit: version

    script:
    def software = getSoftwareName(task.process)
    def prefix   = options.suffix ? "${meta.id}${options.suffix}" : "${meta.id}"
    """
    fgbio \\
        CallMolecularConsensusReads \\
        -i $bam \\
        $options.args \\
        -o ${prefix}.bam
    fgbio --version | sed -e "s/fgbio v//g" > ${software}.version.txt
    """
}
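[Editor's note: since the output pattern *.bam would also match the staged input BAM, callers typically set options.suffix so the consensus file gets a distinct name. An illustrative configuration — the suffix and args values are hypothetical:]

    // '.consensus' is appended to meta.id when naming the output BAM
    include { FGBIO_CALLMOLECULARCONSENSUSREADS } from './software/fgbio/callmolecularconsensusreads/main' \
        addParams( options: [ suffix: '.consensus', args: '--min-reads 1' ] )
    // For meta.id 'test' the process then writes test.consensus.bam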
45  software/fgbio/callmolecularconsensusreads/meta.yml  Normal file
@@ -0,0 +1,45 @@
name: fgbio_callmolecularconsensusreads
description: Calls consensus sequences from reads with the same unique molecular tag.

keywords:
  - UMIs
  - consensus sequence
  - bam
  - sam
tools:
  - fgbio:
      description: Tools for working with genomic and high throughput sequencing data.
      homepage: https://github.com/fulcrumgenomics/fgbio
      documentation: http://fulcrumgenomics.github.io/fgbio/
      licence: ['MIT']

input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false, collapse:false ]
  - bam:
      type: file
      description: |
        The input SAM or BAM file.
      pattern: "*.{bam,sam}"

output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
      description: |
        Output SAM or BAM file to write consensus reads.
      pattern: "*.{bam,sam}"
  - version:
      type: file
      description: File containing software version
      pattern: "*.{version.txt}"

authors:
  - "@sruthipsuresh"
60  software/fgbio/sortbam/functions.nf  Normal file
@@ -0,0 +1,60 @@
/*
 * -----------------------------------------------------
 * Utility functions used in nf-core DSL2 module files
 * -----------------------------------------------------
 */

/*
 * Extract name of software tool from process name using $task.process
 */
def getSoftwareName(task_process) {
    return task_process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
}

/*
 * Function to initialise default values and to generate a Groovy Map of available options for nf-core modules
 */
def initOptions(Map args) {
    def Map options = [:]
    options.args          = args.args ?: ''
    options.args2         = args.args2 ?: ''
    options.args3         = args.args3 ?: ''
    options.publish_by_id = args.publish_by_id ?: false
    options.publish_dir   = args.publish_dir ?: ''
    options.publish_files = args.publish_files
    options.suffix        = args.suffix ?: ''
    return options
}

/*
 * Tidy up and join elements of a list to return a path string
 */
def getPathFromList(path_list) {
    def paths = path_list.findAll { item -> !item?.trim().isEmpty() }  // Remove empty entries
    paths = paths.collect { it.trim().replaceAll("^[/]+|[/]+\$", "") } // Trim whitespace and trailing slashes
    return paths.join('/')
}

/*
 * Function to save/publish module results
 */
def saveFiles(Map args) {
    if (!args.filename.endsWith('.version.txt')) {
        def ioptions  = initOptions(args.options)
        def path_list = [ ioptions.publish_dir ?: args.publish_dir ]
        if (ioptions.publish_by_id) {
            path_list.add(args.publish_id)
        }
        if (ioptions.publish_files instanceof Map) {
            for (ext in ioptions.publish_files) {
                if (args.filename.endsWith(ext.key)) {
                    def ext_list = path_list.collect()
                    ext_list.add(ext.value)
                    return "${getPathFromList(ext_list)}/$args.filename"
                }
            }
        } else if (ioptions.publish_files == null) {
            return "${getPathFromList(path_list)}/$args.filename"
        }
    }
}
39  software/fgbio/sortbam/main.nf  Normal file
@@ -0,0 +1,39 @@
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
options        = initOptions(params.options)

process FGBIO_SORTBAM {
    tag "$meta.id"
    label 'process_medium'
    publishDir "${params.outdir}",
        mode: params.publish_dir_mode,
        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

    conda (params.enable_conda ? "bioconda::fgbio=1.3.0" : null)
    if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
        container "https://depot.galaxyproject.org/singularity/fgbio:1.3.0--0"
    } else {
        container "quay.io/biocontainers/fgbio:1.3.0--0"
    }

    input:
    tuple val(meta), path(bam)

    output:
    tuple val(meta), path("*.bam"), emit: bam
    path "*.version.txt"          , emit: version

    script:
    def software = getSoftwareName(task.process)
    def prefix   = options.suffix ? "${meta.id}${options.suffix}" : "${meta.id}"
    """
    fgbio \\
        SortBam \\
        -i $bam \\
        $options.args \\
        -o ${prefix}.bam
    fgbio --version | sed -e "s/fgbio v//g" > ${software}.version.txt
    """
}
43  software/fgbio/sortbam/meta.yml  Normal file
@@ -0,0 +1,43 @@
name: fgbio_sortbam
description: Sorts a SAM or BAM file. Several sort orders are available, including coordinate, queryname, random, and randomquery.
keywords:
  - sort
  - bam
  - sam
tools:
  - fgbio:
      description: Tools for working with genomic and high throughput sequencing data.
      homepage: https://github.com/fulcrumgenomics/fgbio
      documentation: http://fulcrumgenomics.github.io/fgbio/
      licence: ['MIT']

input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false, collapse:false ]
  - bam:
      type: file
      description: |
        The input SAM or BAM file to be sorted.
      pattern: "*.{bam,sam}"

output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
      description: |
        Output SAM or BAM file.
      pattern: "*.{bam,sam}"
  - version:
      type: file
      description: File containing software version
      pattern: "*.{version.txt}"

authors:
  - "@sruthipsuresh"
61  software/flash/functions.nf  Normal file
@@ -0,0 +1,61 @@

/*
 * -----------------------------------------------------
 * Utility functions used in nf-core DSL2 module files
 * -----------------------------------------------------
 */

/*
 * Extract name of software tool from process name using $task.process
 */
def getSoftwareName(task_process) {
    return task_process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
}

/*
 * Function to initialise default values and to generate a Groovy Map of available options for nf-core modules
 */
def initOptions(Map args) {
    def Map options = [:]
    options.args          = args.args ?: ''
    options.args2         = args.args2 ?: ''
    options.args3         = args.args3 ?: ''
    options.publish_by_id = args.publish_by_id ?: false
    options.publish_dir   = args.publish_dir ?: ''
    options.publish_files = args.publish_files
    options.suffix        = args.suffix ?: ''
    return options
}

/*
 * Tidy up and join elements of a list to return a path string
 */
def getPathFromList(path_list) {
    def paths = path_list.findAll { item -> !item?.trim().isEmpty() }  // Remove empty entries
    paths = paths.collect { it.trim().replaceAll("^[/]+|[/]+\$", '') } // Trim whitespace and trailing slashes
    return paths.join('/')
}

/*
 * Function to save/publish module results
 */
def saveFiles(Map args) {
    if (!args.filename.endsWith('.version.txt')) {
        def ioptions  = initOptions(args.options)
        def path_list = [ ioptions.publish_dir ?: args.publish_dir ]
        if (ioptions.publish_by_id) {
            path_list.add(args.publish_id)
        }
        if (ioptions.publish_files instanceof Map) {
            for (ext in ioptions.publish_files) {
                if (args.filename.endsWith(ext.key)) {
                    def ext_list = path_list.collect()
                    ext_list.add(ext.value)
                    return "${getPathFromList(ext_list)}/$args.filename"
                }
            }
        } else if (ioptions.publish_files == null) {
            return "${getPathFromList(path_list)}/$args.filename"
        }
    }
}
40  software/flash/main.nf  Normal file
@@ -0,0 +1,40 @@
// Import generic module functions
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
options        = initOptions(params.options)

process FLASH {
    tag "$meta.id"
    label 'process_medium'
    publishDir "${params.outdir}",
        mode: params.publish_dir_mode,
        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }
    conda (params.enable_conda ? "bioconda::flash=1.2.11" : null)
    if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
        container "https://depot.galaxyproject.org/singularity/flash:1.2.11--hed695b0_5"
    } else {
        container "quay.io/biocontainers/flash:1.2.11--hed695b0_5"
    }

    input:
    tuple val(meta), path(reads)

    output:
    tuple val(meta), path("*.merged.*.fastq.gz"), emit: reads
    path "*.version.txt"                        , emit: version

    script:
    def software    = getSoftwareName(task.process)
    def prefix      = options.suffix ? "${meta.id}${options.suffix}" : "${meta.id}"
    def merged      = "-o ${prefix}.merged"
    def input_reads = "${reads[0]} ${reads[1]}"
    """
    flash \\
        $options.args \\
        $merged \\
        -z \\
        $input_reads
    echo \$(flash --version) > ${software}.version.txt
    """
}
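[Editor's note: FLASH only makes sense for paired-end input, and the script indexes reads[0]/reads[1] directly. A sketch of the expected channel shape — sample and file names are hypothetical:]

    // Meta map plus a two-element read list, as the module expects
    reads = Channel.of( [ [ id:'test', single_end:false ],
                          [ file('test_1.fastq.gz'), file('test_2.fastq.gz') ] ] )
    FLASH ( reads )   // with -o test.merged and -z, FLASH emits files like test.merged.extendedFrags.fastq.gz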
44  software/flash/meta.yml  Normal file
@@ -0,0 +1,44 @@
name: flash
description: Perform merging of mate paired-end sequencing reads
keywords:
  - sort
  - reads merging
  - merge mate pairs
tools:
  - flash:
      description: |
        Merge mates from fragments that are shorter than twice the read length
      homepage: https://ccb.jhu.edu/software/FLASH/
      documentation: {}
      doi: 10.1093/bioinformatics/btr507
      licence: ['GPL v3+']

input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - reads:
      type: file
      description: |
        List of input FastQ files of size 2; i.e., paired-end data.
      pattern: "*fastq.gz"

output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - reads:
      type: file
      description: The merged fastq reads
      pattern: "*fastq.gz"
  - version:
      type: file
      description: File containing software version
      pattern: "*.{version.txt}"

authors:
  - "@Erkison"
60  software/gatk4/applybqsr/functions.nf  Normal file
@@ -0,0 +1,60 @@
/*
 * -----------------------------------------------------
 * Utility functions used in nf-core DSL2 module files
 * -----------------------------------------------------
 */

/*
 * Extract name of software tool from process name using $task.process
 */
def getSoftwareName(task_process) {
    return task_process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
}

/*
 * Function to initialise default values and to generate a Groovy Map of available options for nf-core modules
 */
def initOptions(Map args) {
    def Map options = [:]
    options.args          = args.args ?: ''
    options.args2         = args.args2 ?: ''
    options.args3         = args.args3 ?: ''
    options.publish_by_id = args.publish_by_id ?: false
    options.publish_dir   = args.publish_dir ?: ''
    options.publish_files = args.publish_files
    options.suffix        = args.suffix ?: ''
    return options
}

/*
 * Tidy up and join elements of a list to return a path string
 */
def getPathFromList(path_list) {
    def paths = path_list.findAll { item -> !item?.trim().isEmpty() }  // Remove empty entries
    paths = paths.collect { it.trim().replaceAll("^[/]+|[/]+\$", "") } // Trim whitespace and trailing slashes
    return paths.join('/')
}

/*
 * Function to save/publish module results
 */
def saveFiles(Map args) {
    if (!args.filename.endsWith('.version.txt')) {
        def ioptions  = initOptions(args.options)
        def path_list = [ ioptions.publish_dir ?: args.publish_dir ]
        if (ioptions.publish_by_id) {
            path_list.add(args.publish_id)
        }
        if (ioptions.publish_files instanceof Map) {
            for (ext in ioptions.publish_files) {
                if (args.filename.endsWith(ext.key)) {
                    def ext_list = path_list.collect()
                    ext_list.add(ext.value)
                    return "${getPathFromList(ext_list)}/$args.filename"
                }
            }
        } else if (ioptions.publish_files == null) {
            return "${getPathFromList(path_list)}/$args.filename"
        }
    }
}
48  software/gatk4/applybqsr/main.nf  Normal file
@@ -0,0 +1,48 @@
// Import generic module functions
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
options        = initOptions(params.options)

process GATK4_APPLYBQSR {
    tag "$meta.id"
    label 'process_low'
    publishDir "${params.outdir}",
        mode: params.publish_dir_mode,
        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

    conda (params.enable_conda ? "bioconda::gatk4=4.2.0.0" : null)
    if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
        container "https://depot.galaxyproject.org/singularity/gatk4:4.2.0.0--0"
    } else {
        container "quay.io/biocontainers/gatk4:4.2.0.0--0"
    }

    input:
    tuple val(meta), path(bam), path(bqsr_table)
    path fasta
    path fastaidx
    path dict
    path intervalsBed

    output:
    tuple val(meta), path("*.bam"), emit: bam
    path "*.version.txt"          , emit: version

    script:
    def software         = getSoftwareName(task.process)
    def prefix           = options.suffix ? "${meta.id}${options.suffix}" : "${meta.id}"
    def intervalsCommand = intervalsBed ? "-L ${intervalsBed}" : ""

    """
    gatk ApplyBQSR \\
        -R $fasta \\
        -I $bam \\
        --bqsr-recal-file $bqsr_table \\
        $intervalsCommand \\
        -O ${prefix}.bam \\
        $options.args

    gatk --version | grep Picard | sed "s/Picard Version: //g" > ${software}.version.txt
    """
}
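[Editor's note: ApplyBQSR consumes the recalibration table produced by the GATK4_BASERECALIBRATOR module added further below, so the two are typically chained. A minimal sketch — the channel names and wiring are hypothetical:]

    // bam_ch is assumed to carry [ meta, bam ] tuples
    GATK4_BASERECALIBRATOR ( bam_ch, fasta, fastaidx, dict, intervalsBed, known_sites, known_sites_tbi )
    recal_ch = bam_ch.join( GATK4_BASERECALIBRATOR.out.table )   // -> [ meta, bam, table ]
    GATK4_APPLYBQSR ( recal_ch, fasta, fastaidx, dict, intervalsBed )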
58  software/gatk4/applybqsr/meta.yml  Normal file
@@ -0,0 +1,58 @@
name: gatk4_applybqsr
description: Apply base quality score recalibration (BQSR) to a bam file
keywords:
  - bqsr
  - bam
tools:
  - gatk4:
      description: |
        Developed in the Data Sciences Platform at the Broad Institute, the toolkit offers a wide variety of tools
        with a primary focus on variant discovery and genotyping. Its powerful processing engine
        and high-performance computing features make it capable of taking on projects of any size.
      homepage: https://gatk.broadinstitute.org/hc/en-us
      documentation: https://gatk.broadinstitute.org/hc/en-us/categories/360002369672s
      doi: 10.1158/1538-7445.AM2017-3590

input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
      description: BAM file from alignment
      pattern: "*.{bam}"
  - bqsr_table:
      type: file
      description: Recalibration table from gatk4_baserecalibrator
  - fasta:
      type: file
      description: The reference fasta file
  - fastaidx:
      type: file
      description: Index of reference fasta file
  - dict:
      type: file
      description: GATK sequence dictionary
  - intervalsBed:
      type: file
      description: Bed file with the genomic regions included in the library (optional)

output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - version:
      type: file
      description: File containing software version
      pattern: "*.{version.txt}"
  - bam:
      type: file
      description: Recalibrated BAM file
      pattern: "*.{bam}"

authors:
  - "@yocra3"
60  software/gatk4/baserecalibrator/functions.nf  Normal file
@@ -0,0 +1,60 @@
/*
 * -----------------------------------------------------
 * Utility functions used in nf-core DSL2 module files
 * -----------------------------------------------------
 */

/*
 * Extract name of software tool from process name using $task.process
 */
def getSoftwareName(task_process) {
    return task_process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
}

/*
 * Function to initialise default values and to generate a Groovy Map of available options for nf-core modules
 */
def initOptions(Map args) {
    def Map options = [:]
    options.args          = args.args ?: ''
    options.args2         = args.args2 ?: ''
    options.args3         = args.args3 ?: ''
    options.publish_by_id = args.publish_by_id ?: false
    options.publish_dir   = args.publish_dir ?: ''
    options.publish_files = args.publish_files
    options.suffix        = args.suffix ?: ''
    return options
}

/*
 * Tidy up and join elements of a list to return a path string
 */
def getPathFromList(path_list) {
    def paths = path_list.findAll { item -> !item?.trim().isEmpty() }  // Remove empty entries
    paths = paths.collect { it.trim().replaceAll("^[/]+|[/]+\$", "") } // Trim whitespace and trailing slashes
    return paths.join('/')
}

/*
 * Function to save/publish module results
 */
def saveFiles(Map args) {
    if (!args.filename.endsWith('.version.txt')) {
        def ioptions  = initOptions(args.options)
        def path_list = [ ioptions.publish_dir ?: args.publish_dir ]
        if (ioptions.publish_by_id) {
            path_list.add(args.publish_id)
        }
        if (ioptions.publish_files instanceof Map) {
            for (ext in ioptions.publish_files) {
                if (args.filename.endsWith(ext.key)) {
                    def ext_list = path_list.collect()
                    ext_list.add(ext.value)
                    return "${getPathFromList(ext_list)}/$args.filename"
                }
            }
        } else if (ioptions.publish_files == null) {
            return "${getPathFromList(path_list)}/$args.filename"
        }
    }
}
50  software/gatk4/baserecalibrator/main.nf  Normal file
@@ -0,0 +1,50 @@
// Import generic module functions
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
options        = initOptions(params.options)

process GATK4_BASERECALIBRATOR {
    tag "$meta.id"
    label 'process_low'
    publishDir "${params.outdir}",
        mode: params.publish_dir_mode,
        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

    conda (params.enable_conda ? "bioconda::gatk4=4.2.0.0" : null)
    if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
        container "https://depot.galaxyproject.org/singularity/gatk4:4.2.0.0--0"
    } else {
        container "quay.io/biocontainers/gatk4:4.2.0.0--0"
    }

    input:
    tuple val(meta), path(bam)
    path fasta
    path fastaidx
    path dict
    path intervalsBed
    path knownSites
    path knownSites_tbi

    output:
    tuple val(meta), path("*.table"), emit: table
    path "*.version.txt"            , emit: version

    script:
    def software         = getSoftwareName(task.process)
    def prefix           = options.suffix ? "${meta.id}${options.suffix}" : "${meta.id}"
    def intervalsCommand = intervalsBed ? "-L ${intervalsBed}" : ""
    def sitesCommand     = knownSites.collect{"--known-sites ${it}"}.join(' ')
    """
    gatk BaseRecalibrator \
        -R $fasta \
        -I $bam \
        $sitesCommand \
        $intervalsCommand \
        $options.args \
        -O ${prefix}.table

    gatk --version | grep Picard | sed "s/Picard Version: //g" > ${software}.version.txt
    """
}
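[Editor's note: the sitesCommand line expands however many known-sites files are staged into repeated --known-sites flags. A Groovy sketch with hypothetical file names:]

    // Flag expansion for multiple known-sites inputs
    def knownSites = [ 'dbsnp.vcf.gz', 'mills_indels.vcf.gz' ]
    assert knownSites.collect { "--known-sites ${it}" }.join(' ') ==
           '--known-sites dbsnp.vcf.gz --known-sites mills_indels.vcf.gz'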
58  software/gatk4/baserecalibrator/meta.yml  Normal file
@@ -0,0 +1,58 @@
name: gatk4_baserecalibrator
description: Generate recalibration table for Base Quality Score Recalibration (BQSR)
keywords:
  - sort
tools:
  - gatk4:
      description: |
        Developed in the Data Sciences Platform at the Broad Institute, the toolkit offers a wide variety of tools
        with a primary focus on variant discovery and genotyping. Its powerful processing engine
        and high-performance computing features make it capable of taking on projects of any size.
      homepage: https://gatk.broadinstitute.org/hc/en-us
      documentation: https://gatk.broadinstitute.org/hc/en-us/categories/360002369672s
      doi: 10.1158/1538-7445.AM2017-3590


input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
      description: BAM file from alignment
      pattern: "*.{bam}"
  - fasta:
      type: file
      description: The reference fasta file
  - fastaidx:
      type: file
      description: Index of reference fasta file
  - dict:
      type: file
      description: GATK sequence dictionary
  - intervalsBed:
      type: file
      description: Bed file with the genomic regions included in the library (optional)
  - knownSites:
      type: file
      description: Bed file with the genomic regions included in the library (optional)

output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - version:
      type: file
      description: File containing software version
      pattern: "*.{version.txt}"
  - table:
      type: file
      description: Recalibration table from BaseRecalibrator
      pattern: "*.{table}"

authors:
  - "@yocra3"
@@ -12,27 +12,6 @@ tools:
       homepage: https://gatk.broadinstitute.org/hc/en-us
       documentation: https://gatk.broadinstitute.org/hc/en-us/categories/360002369672s
       doi: 10.1158/1538-7445.AM2017-3590
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - meta:
       type: map
@@ -12,27 +12,6 @@ tools:
       homepage: https://gatk.broadinstitute.org/hc/en-us
       documentation: https://gatk.broadinstitute.org/hc/en-us/categories/360002369672s
       doi: 10.1158/1538-7445.AM2017-3590
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
 input:
   - fasta:
       type: file
59  software/gatk4/fastqtosam/functions.nf  Normal file
@@ -0,0 +1,59 @@
/*
 * -----------------------------------------------------
 * Utility functions used in nf-core DSL2 module files
 * -----------------------------------------------------
 */

/*
 * Extract name of software tool from process name using $task.process
 */
def getSoftwareName(task_process) {
    return task_process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
}

/*
 * Function to initialise default values and to generate a Groovy Map of available options for nf-core modules
 */
def initOptions(Map args) {
    def Map options = [:]
    options.args          = args.args ?: ''
    options.args2         = args.args2 ?: ''
    options.publish_by_id = args.publish_by_id ?: false
    options.publish_dir   = args.publish_dir ?: ''
    options.publish_files = args.publish_files
    options.suffix        = args.suffix ?: ''
    return options
}

/*
 * Tidy up and join elements of a list to return a path string
 */
def getPathFromList(path_list) {
    def paths = path_list.findAll { item -> !item?.trim().isEmpty() }  // Remove empty entries
    paths = paths.collect { it.trim().replaceAll("^[/]+|[/]+\$", "") } // Trim whitespace and trailing slashes
    return paths.join('/')
}

/*
 * Function to save/publish module results
 */
def saveFiles(Map args) {
    if (!args.filename.endsWith('.version.txt')) {
        def ioptions  = initOptions(args.options)
        def path_list = [ ioptions.publish_dir ?: args.publish_dir ]
        if (ioptions.publish_by_id) {
            path_list.add(args.publish_id)
        }
        if (ioptions.publish_files instanceof Map) {
            for (ext in ioptions.publish_files) {
                if (args.filename.endsWith(ext.key)) {
                    def ext_list = path_list.collect()
                    ext_list.add(ext.value)
                    return "${getPathFromList(ext_list)}/$args.filename"
                }
            }
        } else if (ioptions.publish_files == null) {
            return "${getPathFromList(path_list)}/$args.filename"
        }
    }
}
40  software/gatk4/fastqtosam/main.nf  Normal file
@@ -0,0 +1,40 @@
// Import generic module functions
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
options        = initOptions(params.options)

process GATK4_FASTQTOSAM {
    tag "$meta.id"
    label 'process_medium'
    publishDir "${params.outdir}",
        mode: params.publish_dir_mode,
        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

    conda (params.enable_conda ? "bioconda::gatk4=4.2.0.0" : null)
    if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
        container "https://depot.galaxyproject.org/singularity/gatk4:4.2.0.0--0"
    } else {
        container "quay.io/biocontainers/gatk4:4.2.0.0--0"
    }

    input:
    tuple val(meta), path(reads)

    output:
    tuple val(meta), path("*.bam"), emit: bam
    path "*.version.txt"          , emit: version

    script:
    def software   = getSoftwareName(task.process)
    def prefix     = options.suffix ? "${meta.id}${options.suffix}" : "${meta.id}"
    def read_files = meta.single_end ? "-F1 $reads" : "-F1 ${reads[0]} -F2 ${reads[1]}"
    """
    gatk FastqToSam \\
        $read_files \\
        -O ${prefix}.bam \\
        -SM $prefix \\
        $options.args
    gatk --version | grep Picard | sed "s/Picard Version: //g" > ${software}.version.txt
    """
}
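[Editor's note: the read_files ternary keys off meta.single_end, so one module serves both layouts. A Groovy sketch with hypothetical values:]

    // Argument construction for single-end vs paired-end input
    def meta  = [ id:'test', single_end:false ]
    def reads = [ 'test_1.fastq.gz', 'test_2.fastq.gz' ]
    def read_files = meta.single_end ? "-F1 $reads" : "-F1 ${reads[0]} -F2 ${reads[1]}"
    assert read_files == '-F1 test_1.fastq.gz -F2 test_2.fastq.gz'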
47  software/gatk4/fastqtosam/meta.yml  Normal file
@@ -0,0 +1,47 @@
name: gatk4_fastqtosam
description: Converts FastQ file to BAM format
keywords:
  - bam
  - fastq
  - convert
tools:
  - gatk4:
      description: Genome Analysis Toolkit (GATK4)
        Developed in the Data Sciences Platform at the Broad Institute, the toolkit offers a wide variety of tools
        with a primary focus on variant discovery and genotyping. Its powerful processing engine
        and high-performance computing features make it capable of taking on projects of any size.
      homepage: https://gatk.broadinstitute.org/hc/en-us
      documentation: https://gatk.broadinstitute.org/hc/en-us/categories/360002369672s
      tool_dev_url: https://github.com/broadinstitute/gatk
      doi: "10.1158/1538-7445.AM2017-3590"
      licence: ['BSD-3-clause']

input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - reads:
      type: file
      description: List of input FastQ files of size 1 and 2 for single-end and paired-end data,
        respectively.
      pattern: "*.fastq.gz"

output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - version:
      type: file
      description: File containing software version
      pattern: "*.{version.txt}"
  - bam:
      type: file
      description: Converted BAM file
      pattern: "*.bam"

authors:
  - "@ntoda03"
@ -0,0 +1,60 @@
/*
 * -----------------------------------------------------
 *  Utility functions used in nf-core DSL2 module files
 * -----------------------------------------------------
 */

/*
 * Extract name of software tool from process name using $task.process
 */
def getSoftwareName(task_process) {
    return task_process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
}

/*
 * Function to initialise default values and to generate a Groovy Map of available options for nf-core modules
 */
def initOptions(Map args) {
    def Map options = [:]
    options.args          = args.args ?: ''
    options.args2         = args.args2 ?: ''
    options.args3         = args.args3 ?: ''
    options.publish_by_id = args.publish_by_id ?: false
    options.publish_dir   = args.publish_dir ?: ''
    options.publish_files = args.publish_files
    options.suffix        = args.suffix ?: ''
    return options
}

/*
 * Tidy up and join elements of a list to return a path string
 */
def getPathFromList(path_list) {
    def paths = path_list.findAll { item -> !item?.trim().isEmpty() }  // Remove empty entries
    paths = paths.collect { it.trim().replaceAll("^[/]+|[/]+\$", "") } // Trim whitespace and leading/trailing slashes
    return paths.join('/')
}

/*
 * Function to save/publish module results
 */
def saveFiles(Map args) {
    if (!args.filename.endsWith('.version.txt')) {
        def ioptions  = initOptions(args.options)
        def path_list = [ ioptions.publish_dir ?: args.publish_dir ]
        if (ioptions.publish_by_id) {
            path_list.add(args.publish_id)
        }
        if (ioptions.publish_files instanceof Map) {
            for (ext in ioptions.publish_files) {
                if (args.filename.endsWith(ext.key)) {
                    def ext_list = path_list.collect()
                    ext_list.add(ext.value)
                    return "${getPathFromList(ext_list)}/$args.filename"
                }
            }
        } else if (ioptions.publish_files == null) {
            return "${getPathFromList(path_list)}/$args.filename"
        }
    }
}
52
software/gatk4/haplotypecaller/main.nf
Normal file
@@ -0,0 +1,52 @@
// Import generic module functions
include { initOptions; saveFiles; getSoftwareName } from './functions'

params.options = [:]
options        = initOptions(params.options)

process GATK4_HAPLOTYPECALLER {
    tag "$meta.id"
    label 'process_medium'
    publishDir "${params.outdir}",
        mode: params.publish_dir_mode,
        saveAs: { filename -> saveFiles(filename:filename, options:params.options, publish_dir:getSoftwareName(task.process), publish_id:meta.id) }

    conda (params.enable_conda ? "bioconda::gatk4=4.2.0.0" : null)
    if (workflow.containerEngine == 'singularity' && !params.singularity_pull_docker_container) {
        container "https://depot.galaxyproject.org/singularity/gatk4:4.2.0.0--0"
    } else {
        container "quay.io/biocontainers/gatk4:4.2.0.0--0"
    }

    input:
    tuple val(meta), path(bam), path(bai)
    path fasta
    path fai
    path dict

    output:
    tuple val(meta), path("*.vcf.gz"), emit: vcf
    tuple val(meta), path("*.tbi")   , emit: tbi
    path "*.version.txt"             , emit: version

    script:
    def software  = getSoftwareName(task.process)
    def prefix    = options.suffix ? "${meta.id}${options.suffix}" : "${meta.id}"
    def avail_mem = 3
    if (!task.memory) {
        log.info '[GATK HaplotypeCaller] Available memory not known - defaulting to 3GB. Specify process memory requirements to change this.'
    } else {
        avail_mem = task.memory.giga
    }
    """
    gatk \\
        --java-options "-Xmx${avail_mem}g" \\
        HaplotypeCaller \\
        -R $fasta \\
        -I $bam \\
        -O ${prefix}.vcf.gz \\
        $options.args

    gatk --version | grep -e 'Genome Analysis Toolkit' | sed 's/^.*(GATK) v//' > ${software}.version.txt
    """
}
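To show how the process is meant to be consumed, here is a minimal, hypothetical DSL2 workflow. The file paths, the `-ERC GVCF` argument, and the params defaults are assumptions for illustration, not part of the module:

```groovy
// Hypothetical wiring of the module into a workflow; all paths are invented.
nextflow.enable.dsl = 2

params.outdir           = 'results'
params.publish_dir_mode = 'copy'
params.enable_conda     = false

include { GATK4_HAPLOTYPECALLER } from './software/gatk4/haplotypecaller/main' addParams( options: [ args: '-ERC GVCF' ] )

workflow {
    // Matches "tuple val(meta), path(bam), path(bai)" in the process input block
    ch_bam = Channel.of( [ [ id:'test', single_end:false ],
                           file('test.sorted.bam'),
                           file('test.sorted.bam.bai') ] )

    GATK4_HAPLOTYPECALLER( ch_bam, file('genome.fasta'), file('genome.fasta.fai'), file('genome.dict') )
    GATK4_HAPLOTYPECALLER.out.vcf.view()
}
```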
63
software/gatk4/haplotypecaller/meta.yml
Normal file
@@ -0,0 +1,63 @@
name: gatk4_haplotypecaller
description: Call germline SNPs and indels via local re-assembly of haplotypes
keywords:
  - gatk4
  - haplotypecaller
  - haplotype
tools:
  - gatk4:
      description: |
        Developed in the Data Sciences Platform at the Broad Institute, the toolkit offers a wide variety of tools
        with a primary focus on variant discovery and genotyping. Its powerful processing engine
        and high-performance computing features make it capable of taking on projects of any size.
      homepage: https://gatk.broadinstitute.org/hc/en-us
      documentation: https://gatk.broadinstitute.org/hc/en-us/categories/360002369672
      doi: "10.1158/1538-7445.AM2017-3590"

input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
      description: BAM file
      pattern: "*.bam"
  - bai:
      type: file
      description: Index of BAM file
      pattern: "*.bam.bai"
  - fasta:
      type: file
      description: The reference fasta file
      pattern: "*.fasta"
  - fai:
      type: file
      description: Index of reference fasta file
      pattern: "*.fasta.fai"
  - dict:
      type: file
      description: GATK sequence dictionary
      pattern: "*.dict"

output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - version:
      type: file
      description: File containing software version
      pattern: "*.{version.txt}"
  - vcf:
      type: file
      description: Compressed VCF file
      pattern: "*.vcf.gz"
  - tbi:
      type: file
      description: Index of VCF file
      pattern: "*.vcf.gz.tbi"

authors:
  - "@suzannejin"
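Finally, a hedged sketch of a structural sanity check for this metadata; the SnakeYAML coordinates and the relative path are assumptions, and network access is needed for `@Grab`:

```groovy
// Quick parse-and-check of the meta.yml above, run from the repository root.
@Grab('org.yaml:snakeyaml:1.28')
import org.yaml.snakeyaml.Yaml

def meta = new Yaml().load(new File('software/gatk4/haplotypecaller/meta.yml').text)
assert meta.name == 'gatk4_haplotypecaller'
// input/output are lists of single-key maps, so collect and flatten the key sets.
assert meta.input*.keySet().flatten().containsAll(['meta', 'bam', 'bai', 'fasta', 'fai', 'dict'])
assert meta.output*.keySet().flatten().containsAll(['meta', 'version', 'vcf', 'tbi'])
println "meta.yml for ${meta.name} looks structurally sound"
```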
Some files were not shown because too many files have changed in this diff