Mirror of https://github.com/MillironX/nf-core_modules.git, synced 2024-11-13 05:13:09 +00:00

Merge branch 'master' into rp3-add-shigatyper

This commit is contained in commit d7fb969a74.

85 changed files with 1314 additions and 293 deletions.
.github/ISSUE_TEMPLATE/bug_report.md (vendored, 64 changes, deleted)

@@ -1,64 +0,0 @@
----
-name: Bug report
-about: Report something that is broken or incorrect
-title: "[BUG]"
----
-
-<!--
-# nf-core/module bug report
-
-Hi there!
-
-Thanks for telling us about a problem with the modules.
-Please delete this text and anything that's not relevant from the template below:
--->
-
-## Check Documentation
-
-I have checked the following places for your error:
-
-- [ ] [nf-core website: troubleshooting](https://nf-co.re/usage/troubleshooting)
-- [ ] [nf-core/module documentation](https://github.com/nf-core/modules/blob/master/README.md)
-
-## Description of the bug
-
-<!-- A clear and concise description of what the bug is. -->
-
-## Steps to reproduce
-
-Steps to reproduce the behaviour:
-
-1. Command line: <!-- [e.g. `nextflow run ...`] -->
-2. See error: <!-- [Please provide your error message] -->
-
-## Expected behaviour
-
-<!-- A clear and concise description of what you expected to happen. -->
-
-## Log files
-
-Have you provided the following extra information/files:
-
-- [ ] The command used to run the module
-- [ ] The `.nextflow.log` file <!-- this is a hidden file in the directory where you launched the module -->
-
-## System
-
-- Hardware: <!-- [e.g. HPC, Desktop, Cloud...] -->
-- Executor: <!-- [e.g. slurm, local, awsbatch...] -->
-- OS: <!-- [e.g. CentOS Linux, macOS, Linux Mint...] -->
-- Version <!-- [e.g. 7, 10.13.6, 18.3...] -->
-
-## Nextflow Installation
-
-- Version: <!-- [e.g. 19.10.0] -->
-
-## Container engine
-
-- Engine: <!-- [e.g. Conda, Docker, Singularity or Podman] -->
-- version: <!-- [e.g. 1.0.0] -->
-- Image tag: <!-- [e.g. nfcore/module:2.6] -->
-
-## Additional context
-
-<!-- Add any other context about the problem here. -->
.github/ISSUE_TEMPLATE/bug_report.yml (vendored, 52 changes, new file)

@@ -0,0 +1,52 @@
+name: Bug report
+description: Report something that is broken or incorrect
+labels: bug
+body:
+  - type: checkboxes
+    attributes:
+      label: Have you checked the docs?
+      description: I have checked the following places for my error
+      options:
+        - label: "[nf-core website: troubleshooting](https://nf-co.re/usage/troubleshooting)"
+          required: true
+        - label: "[nf-core modules documentation](https://nf-co.re/docs/contributing/modules)"
+          required: true
+
+  - type: textarea
+    id: description
+    attributes:
+      label: Description of the bug
+      description: A clear and concise description of what the bug is.
+    validations:
+      required: true
+
+  - type: textarea
+    id: command_used
+    attributes:
+      label: Command used and terminal output
+      description: Steps to reproduce the behaviour. Please paste the command you used to launch the pipeline and the output from your terminal.
+      render: console
+      placeholder: |
+        $ nextflow run ...
+
+        Some output where something broke
+
+  - type: textarea
+    id: files
+    attributes:
+      label: Relevant files
+      description: |
+        Please drag and drop the relevant files here. Create a `.zip` archive if the extension is not allowed.
+        Your verbose log file `.nextflow.log` is often useful _(this is a hidden file in the directory where you launched the pipeline)_ as well as custom Nextflow configuration files.
+
+  - type: textarea
+    id: system
+    attributes:
+      label: System information
+      description: |
+        * Nextflow version _(eg. 21.10.3)_
+        * Hardware _(eg. HPC, Desktop, Cloud)_
+        * Executor _(eg. slurm, local, awsbatch)_
+        * Container engine and version: _(e.g. Docker 1.0.0, Singularity, Conda, Podman, Shifter or Charliecloud)_
+        * OS and version: _(eg. CentOS Linux, macOS, Ubuntu 22.04)_
+        * Image tag: <!-- [e.g. nfcore/cellranger:2.6] -->
.github/ISSUE_TEMPLATE/feature_request.md (vendored, 32 changes, deleted)

@@ -1,32 +0,0 @@
----
-name: Feature request
-about: Suggest an idea for nf-core/modules
-title: "[FEATURE]"
----
-
-<!--
-# nf-core/modules feature request
-
-Hi there!
-
-Thanks for suggesting a new feature for the modules!
-Please delete this text and anything that's not relevant from the template below:
--->
-
-## Is your feature request related to a problem? Please describe
-
-<!-- A clear and concise description of what the problem is. -->
-
-<!-- e.g. [I'm always frustrated when ...] -->
-
-## Describe the solution you'd like
-
-<!-- A clear and concise description of what you want to happen. -->
-
-## Describe alternatives you've considered
-
-<!-- A clear and concise description of any alternative solutions or features you've considered. -->
-
-## Additional context
-
-<!-- Add any other context about the feature request here. -->
.github/ISSUE_TEMPLATE/feature_request.yml (vendored, 32 changes, new file)

@@ -0,0 +1,32 @@
+name: Feature request
+description: Suggest an idea for nf-core/modules
+labels: feature
+title: "[FEATURE]"
+body:
+  - type: textarea
+    id: description
+    attributes:
+      label: Is your feature request related to a problem? Please describe
+      description: A clear and concise description of what the bug is.
+      placeholder: |
+        <!-- e.g. [I'm always frustrated when ...] -->
+    validations:
+      required: true
+
+  - type: textarea
+    id: solution
+    attributes:
+      label: Describe the solution you'd like
+      description: A clear and concise description of the solution you want to happen.
+
+  - type: textarea
+    id: alternatives
+    attributes:
+      label: Describe alternatives you've considered
+      description: A clear and concise description of any alternative solutions or features you've considered.
+
+  - type: textarea
+    id: additional_context
+    attributes:
+      label: Additional context
+      description: Add any other context about the feature request here.
.github/ISSUE_TEMPLATE/new_module.md (vendored, 26 changes, deleted)

@@ -1,26 +0,0 @@
----
-name: New module
-about: Suggest a new module for nf-core/modules
-title: "new module: TOOL/SUBTOOL"
-label: new module
----
-
-<!--
-# nf-core/modules new module suggestion
-
-Hi there!
-
-Thanks for suggesting a new module for the modules!
-Please delete this text and anything that's not relevant from the template below:
-
-Replace TOOL with the bioconda name for the tool in the following text, so that the link is functional.
-
-Replace TOOL/SUBTOOL in the issue title so that it's understandable.
--->
-
-I think it would be good to have a module for [TOOL](https://bioconda.github.io/recipes/TOOL/README.html)
-
-- [ ] This module does not exist yet with the [`nf-core modules list`](https://github.com/nf-core/tools#list-modules) command
-- [ ] There is no [open pull request](https://github.com/nf-core/modules/pulls) for this module
-- [ ] There is no [open issue](https://github.com/nf-core/modules/issues) for this module
-- [ ] If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
.github/ISSUE_TEMPLATE/new_module.yml (vendored, 36 changes, new file)

@@ -0,0 +1,36 @@
+name: New module
+description: Suggest a new module for nf-core/modules
+title: "new module: TOOL/SUBTOOL"
+labels: new module
+body:
+  - type: checkboxes
+    attributes:
+      label: Is there an existing module for this?
+      description: This module does not exist yet with the [`nf-core modules list`](https://github.com/nf-core/tools#list-modules) command
+      options:
+        - label: I have searched for the existing module
+          required: true
+
+  - type: checkboxes
+    attributes:
+      label: Is there an open PR for this?
+      description: There is no [open pull request](https://github.com/nf-core/modules/pulls) for this module
+      options:
+        - label: I have searched for existing PRs
+          required: true
+
+  - type: checkboxes
+    attributes:
+      label: Is there an open issue for this?
+      description: There is no [open issue](https://github.com/nf-core/modules/issues) for this module
+      options:
+        - label: I have searched for existing issues
+          required: true
+
+  - type: checkboxes
+    attributes:
+      label: Are you going to work on this?
+      description: If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
+      options:
+        - label: If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
+          required: false
@@ -27,9 +27,7 @@ process ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES {

     output:
     path("antismash_db") , emit: database
-    path("css"), emit: css_dir
-    path("detection"), emit: detection_dir
-    path("modules"), emit: modules_dir
+    path("antismash_dir"), emit: antismash_dir
     path "versions.yml", emit: versions

     when:

@@ -37,11 +35,19 @@ process ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES {
     script:
     def args = task.ext.args ?: ''
+    conda = params.enable_conda
     """
    download-antismash-databases \\
        --database-dir antismash_db \\
        $args

+    if [[ $conda = false ]]; \
+    then \
+        cp -r /usr/local/lib/python3.8/site-packages/antismash antismash_dir; \
+    else \
+        cp -r \$(python -c 'import antismash;print(antismash.__file__.split("/__")[0])') antismash_dir; \
+    fi

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        antismash-lite: \$(antismash --version | sed 's/antiSMASH //')
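A note on the new branch above: `conda = params.enable_conda` is a Groovy value, so `$conda` is interpolated into the bash script before execution and the `[[ $conda = false ]]` test compares literal `true`/`false` strings. A minimal Groovy sketch of that behaviour (illustrative only, not part of the commit):

// Illustrative sketch: Groovy booleans render as 'true'/'false' inside GStrings,
// which is exactly what the bash test in the script block ends up seeing.
def conda = false
def line = "if [[ $conda = false ]]; then echo from_container; else echo from_conda; fi"
assert line == 'if [[ false = false ]]; then echo from_container; else echo from_conda; fi'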
@@ -50,21 +50,11 @@ output:
       type: directory
       description: Download directory for antiSMASH databases
       pattern: "antismash_db"
-  - css_dir:
+  - antismash_dir:
       type: directory
       description: |
-        antismash/outputs/html/css folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
-      pattern: "css"
-  - detection_dir:
-      type: directory
-      description: |
-        antismash/detection folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
-      pattern: "detection"
-  - modules_dir:
-      type: directory
-      description: |
-        antismash/modules folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
-      pattern: "modules"
+        antismash installation folder which is being modified during the antiSMASH database downloading step. The modified files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database and installation folder in pipelines.
+      pattern: "antismash_dir"

 authors:
   - "@jasmezz"
@@ -2,10 +2,10 @@ process BAMTOOLS_SPLIT {
     tag "$meta.id"
     label 'process_low'

-    conda (params.enable_conda ? "bioconda::bamtools=2.5.1" : null)
+    conda (params.enable_conda ? "bioconda::bamtools=2.5.2" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/bamtools:2.5.1--h9a82719_9' :
-        'quay.io/biocontainers/bamtools:2.5.1--h9a82719_9' }"
+        'https://depot.galaxyproject.org/singularity/bamtools:2.5.2--hd03093a_0' :
+        'quay.io/biocontainers/bamtools:2.5.2--hd03093a_0' }"

     input:
     tuple val(meta), path(bam)

@@ -20,11 +20,15 @@ process BAMTOOLS_SPLIT {
     script:
     def args = task.ext.args ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
+    def input_list = bam.collect{"-in $it"}.join(' ')
     """
     bamtools \\
-        split \\
-        -in $bam \\
-        $args
+        merge \\
+        $input_list \\
+        | bamtools \\
+        split \\
+        -stub $prefix \\
+        $args

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
@@ -23,7 +23,7 @@ input:
         e.g. [ id:'test', single_end:false ]
   - bam:
       type: file
-      description: A BAM file to split
+      description: A list of one or more BAM files to merge and then split
       pattern: "*.bam"

 output:

@@ -43,3 +43,4 @@ output:

 authors:
   - "@sguizard"
+  - "@matthdsm"
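A hedged usage sketch for the reworked module: each input channel element now carries a list of BAM files, which the module merges before splitting on the `-stub` prefix. The include path follows the nf-core layout; the file names are hypothetical.

include { BAMTOOLS_SPLIT } from './modules/bamtools/split/main'

workflow {
    // hypothetical test input: one meta map plus a list of BAMs to merge-then-split
    ch_bam = Channel.of([
        [ id:'test', single_end:false ],
        [ file('test1.sorted.bam'), file('test2.sorted.bam') ]
    ])
    BAMTOOLS_SPLIT ( ch_bam )
}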
@@ -2,20 +2,26 @@ process DIAMOND_BLASTP {
     tag "$meta.id"
     label 'process_medium'

-    // Diamond is limited to v2.0.9 because there is not a
-    // singularity version higher than this at the current time.
-    conda (params.enable_conda ? "bioconda::diamond=2.0.9" : null)
+    conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
-        'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
+        'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
+        'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"

     input:
     tuple val(meta), path(fasta)
-    path  db
+    path db
+    val out_ext
+    val blast_columns

     output:
-    tuple val(meta), path('*.txt'), emit: txt
-    path "versions.yml"           , emit: versions
+    tuple val(meta), path('*.blast'), optional: true, emit: blast
+    tuple val(meta), path('*.xml')  , optional: true, emit: xml
+    tuple val(meta), path('*.txt')  , optional: true, emit: txt
+    tuple val(meta), path('*.daa')  , optional: true, emit: daa
+    tuple val(meta), path('*.sam')  , optional: true, emit: sam
+    tuple val(meta), path('*.tsv')  , optional: true, emit: tsv
+    tuple val(meta), path('*.paf')  , optional: true, emit: paf
+    path "versions.yml"             , emit: versions

     when:
     task.ext.when == null || task.ext.when

@@ -23,6 +29,21 @@ process DIAMOND_BLASTP {
     script:
     def args = task.ext.args ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
+    def columns = blast_columns ? "${blast_columns}" : ''
+    switch ( out_ext ) {
+        case "blast": outfmt = 0; break
+        case "xml": outfmt = 5; break
+        case "txt": outfmt = 6; break
+        case "daa": outfmt = 100; break
+        case "sam": outfmt = 101; break
+        case "tsv": outfmt = 102; break
+        case "paf": outfmt = 103; break
+        default:
+            outfmt = '6';
+            out_ext = 'txt';
+            log.warn("Unknown output file format provided (${out_ext}): selecting DIAMOND default of tabular BLAST output (txt)");
+            break
+    }
     """
     DB=`find -L ./ -name "*.dmnd" | sed 's/.dmnd//'`

@@ -31,8 +52,9 @@ process DIAMOND_BLASTP {
         --threads $task.cpus \\
         --db \$DB \\
         --query $fasta \\
+        --outfmt ${outfmt} ${columns} \\
         $args \\
-        --out ${prefix}.txt
+        --out ${prefix}.${out_ext}

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
@@ -28,12 +28,50 @@ input:
       type: directory
       description: Directory containing the protein blast database
       pattern: "*"
+  - out_ext:
+      type: string
+      description: |
+        Specify the type of output file to be generated. `blast` corresponds to
+        BLAST pairwise format. `xml` corresponds to BLAST xml format.
+        `txt` corresponds to BLAST tabular format. `tsv` corresponds to
+        taxonomic classification format.
+      pattern: "blast|xml|txt|daa|sam|tsv|paf"
+  - blast_columns:
+      type: string
+      description: |
+        Optional space-separated list of DIAMOND tabular BLAST output keywords
+        used in conjunction with the 'txt' out_ext option (--outfmt 6). See
+        DIAMOND documentation for more information.

 output:
-  - txt:
+  - blast:
       type: file
       description: File containing blastp hits
-      pattern: "*.{blastp.txt}"
+      pattern: "*.{blast}"
+  - xml:
+      type: file
+      description: File containing blastp hits
+      pattern: "*.{xml}"
+  - txt:
+      type: file
+      description: File containing hits in tabular BLAST format.
+      pattern: "*.{txt}"
+  - daa:
+      type: file
+      description: File containing hits in DAA format
+      pattern: "*.{daa}"
+  - sam:
+      type: file
+      description: File containing aligned reads in SAM format
+      pattern: "*.{sam}"
+  - tsv:
+      type: file
+      description: Tab separated file containing taxonomic classification of hits
+      pattern: "*.{tsv}"
+  - paf:
+      type: file
+      description: File containing aligned reads in pairwise mapping format
+      pattern: "*.{paf}"
   - versions:
       type: file
       description: File containing software versions

@@ -41,3 +79,4 @@ output:

 authors:
   - "@spficklin"
+  - "@jfy133"
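A minimal sketch of calling the updated module with the two new value inputs; the database directory and FASTA names here are hypothetical. Passing `txt` selects `--outfmt 6`, the optional column keywords are appended after it, and an unknown `out_ext` falls back to tabular output with the warning shown in the switch block above.

include { DIAMOND_BLASTP } from './modules/diamond/blastp/main'

workflow {
    ch_fasta = Channel.of([ [ id:'test' ], file('proteins.fasta') ]) // hypothetical query
    db       = file('diamond_db')                                    // directory containing a *.dmnd file
    DIAMOND_BLASTP ( ch_fasta, db, 'txt', 'qseqid sseqid pident evalue' )
    DIAMOND_BLASTP.out.txt.view() // only the channel matching out_ext is populated
}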
@@ -2,20 +2,26 @@ process DIAMOND_BLASTX {
     tag "$meta.id"
     label 'process_medium'

-    // Diamond is limited to v2.0.9 because there is not a
-    // singularity version higher than this at the current time.
-    conda (params.enable_conda ? "bioconda::diamond=2.0.9" : null)
+    conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
-        'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
+        'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
+        'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"

     input:
     tuple val(meta), path(fasta)
-    path  db
+    path db
+    val out_ext
+    val blast_columns

     output:
-    tuple val(meta), path('*.txt'), emit: txt
-    path "versions.yml"           , emit: versions
+    tuple val(meta), path('*.blast'), optional: true, emit: blast
+    tuple val(meta), path('*.xml')  , optional: true, emit: xml
+    tuple val(meta), path('*.txt')  , optional: true, emit: txt
+    tuple val(meta), path('*.daa')  , optional: true, emit: daa
+    tuple val(meta), path('*.sam')  , optional: true, emit: sam
+    tuple val(meta), path('*.tsv')  , optional: true, emit: tsv
+    tuple val(meta), path('*.paf')  , optional: true, emit: paf
+    path "versions.yml"             , emit: versions

     when:
     task.ext.when == null || task.ext.when

@@ -23,6 +29,21 @@ process DIAMOND_BLASTX {
     script:
     def args = task.ext.args ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
+    def columns = blast_columns ? "${blast_columns}" : ''
+    switch ( out_ext ) {
+        case "blast": outfmt = 0; break
+        case "xml": outfmt = 5; break
+        case "txt": outfmt = 6; break
+        case "daa": outfmt = 100; break
+        case "sam": outfmt = 101; break
+        case "tsv": outfmt = 102; break
+        case "paf": outfmt = 103; break
+        default:
+            outfmt = '6';
+            out_ext = 'txt';
+            log.warn("Unknown output file format provided (${out_ext}): selecting DIAMOND default of tabular BLAST output (txt)");
+            break
+    }
     """
     DB=`find -L ./ -name "*.dmnd" | sed 's/.dmnd//'`

@@ -31,8 +52,9 @@ process DIAMOND_BLASTX {
         --threads $task.cpus \\
         --db \$DB \\
         --query $fasta \\
+        --outfmt ${outfmt} ${columns} \\
         $args \\
-        --out ${prefix}.txt
+        --out ${prefix}.${out_ext}

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
@@ -28,12 +28,44 @@ input:
       type: directory
       description: Directory containing the nucleotide blast database
       pattern: "*"
+  - out_ext:
+      type: string
+      description: |
+        Specify the type of output file to be generated. `blast` corresponds to
+        BLAST pairwise format. `xml` corresponds to BLAST xml format.
+        `txt` corresponds to BLAST tabular format. `tsv` corresponds to
+        taxonomic classification format.
+      pattern: "blast|xml|txt|daa|sam|tsv|paf"

 output:
+  - blast:
+      type: file
+      description: File containing blastx hits
+      pattern: "*.{blast}"
+  - xml:
+      type: file
+      description: File containing blastx hits
+      pattern: "*.{xml}"
   - txt:
       type: file
-      description: File containing blastx hits
-      pattern: "*.{blastx.txt}"
+      description: File containing hits in tabular BLAST format.
+      pattern: "*.{txt}"
+  - daa:
+      type: file
+      description: File containing hits in DAA format
+      pattern: "*.{daa}"
+  - sam:
+      type: file
+      description: File containing aligned reads in SAM format
+      pattern: "*.{sam}"
+  - tsv:
+      type: file
+      description: Tab separated file containing taxonomic classification of hits
+      pattern: "*.{tsv}"
+  - paf:
+      type: file
+      description: File containing aligned reads in pairwise mapping format
+      pattern: "*.{paf}"
   - versions:
       type: file
       description: File containing software versions

@@ -41,3 +73,4 @@ output:

 authors:
   - "@spficklin"
+  - "@jfy133"
@@ -2,12 +2,10 @@ process DIAMOND_MAKEDB {
     tag "$fasta"
     label 'process_medium'

-    // Diamond is limited to v2.0.9 because there is not a
-    // singularity version higher than this at the current time.
-    conda (params.enable_conda ? 'bioconda::diamond=2.0.9' : null)
+    conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
-        'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
+        'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
+        'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"

     input:
     path fasta
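All three DIAMOND modules now share the 2.0.15 build, so a database built by DIAMOND_MAKEDB can be fed straight into the aligners. A hedged chaining sketch, assuming the makedb module's output is emitted as `db` (file names hypothetical):

include { DIAMOND_MAKEDB } from './modules/diamond/makedb/main'
include { DIAMOND_BLASTX } from './modules/diamond/blastx/main'

workflow {
    DIAMOND_MAKEDB ( file('reference_proteins.fasta') )              // hypothetical reference
    ch_contigs = Channel.of([ [ id:'test' ], file('contigs.fasta') ])
    // assumption: DIAMOND_MAKEDB.out.db carries the freshly built *.dmnd database
    DIAMOND_BLASTX ( ch_contigs, DIAMOND_MAKEDB.out.db, 'txt', '' )
}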
modules/elprep/merge/main.nf (43 changes, new file)

@@ -0,0 +1,43 @@
+process ELPREP_MERGE {
+    tag "$meta.id"
+    label 'process_low'
+
+    conda (params.enable_conda ? "bioconda::elprep=5.1.2" : null)
+    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
+        'https://depot.galaxyproject.org/singularity/elprep:5.1.2--he881be0_0':
+        'quay.io/biocontainers/elprep:5.1.2--he881be0_0' }"
+
+    input:
+    tuple val(meta), path(bam)
+
+    output:
+    tuple val(meta), path("output/**.{bam,sam}") , emit: bam
+    path "versions.yml"                          , emit: versions
+
+    when:
+    task.ext.when == null || task.ext.when
+
+    script:
+    def args = task.ext.args ?: ''
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    def suffix = args.contains("--output-type sam") ? "sam" : "bam"
+    def single_end = meta.single_end ? " --single-end" : ""
+
+    """
+    # create directory and move all input so elprep can find and merge them before splitting
+    mkdir input
+    mv ${bam} input/
+
+    elprep merge \\
+        input/ \\
+        output/${prefix}.${suffix} \\
+        $args \\
+        ${single_end} \\
+        --nr-of-threads $task.cpus
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        elprep: \$(elprep 2>&1 | head -n2 | tail -n1 |sed 's/^.*version //;s/ compiled.*\$//')
+    END_VERSIONS
+    """
+}
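A usage sketch for the new module, assuming upstream elprep-split-style chunks have been grouped per sample (channel contents hypothetical):

include { ELPREP_MERGE } from './modules/elprep/merge/main'

workflow {
    // hypothetical: all chunks for one sample collected into a single list element
    ch_chunks = Channel.of([
        [ id:'test', single_end:false ],
        [ file('chunk_1.bam'), file('chunk_2.bam') ]
    ])
    ELPREP_MERGE ( ch_chunks )
    ELPREP_MERGE.out.bam.view()
}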
modules/elprep/merge/meta.yml (44 changes, new file)

@@ -0,0 +1,44 @@
+name: "elprep_merge"
+description: Merge split bam/sam chunks in one file
+keywords:
+  - bam
+  - sam
+  - merge
+tools:
+  - "elprep":
+      description: "elPrep is a high-performance tool for preparing .sam/.bam files for variant calling in sequencing pipelines. It can be used as a drop-in replacement for SAMtools/Picard/GATK4."
+      homepage: "https://github.com/ExaScience/elprep"
+      documentation: "https://github.com/ExaScience/elprep"
+      tool_dev_url: "https://github.com/ExaScience/elprep"
+      doi: "10.1371/journal.pone.0244471"
+      licence: "['AGPL v3']"
+
+input:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - bam:
+      type: file
+      description: List of BAM/SAM chunks to merge
+      pattern: "*.{bam,sam}"
+
+output:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - versions:
+      type: file
+      description: File containing software versions
+      pattern: "versions.yml"
+  - bam:
+      type: file
+      description: Merged BAM/SAM file
+      pattern: "*.{bam,sam}"
+
+authors:
+  - "@matthdsm"
@@ -12,7 +12,7 @@ process GATK4_MARKDUPLICATES {

     output:
     tuple val(meta), path("*.bam")    , emit: bam
-    tuple val(meta), path("*.bai")    , emit: bai
+    tuple val(meta), path("*.bai")    , optional:true, emit: bai
     tuple val(meta), path("*.metrics"), emit: metrics
     path "versions.yml"               , emit: versions
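The `.bai` output is optional here because MarkDuplicates only writes an index when asked to. A hedged configuration sketch that would populate it; the flag is a standard Picard/GATK4 option but treating it as the trigger for this channel is an assumption:

process {
    withName: GATK4_MARKDUPLICATES {
        // assumption: requesting an index makes the optional bai channel non-empty
        ext.args = '--CREATE_INDEX true'
    }
}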
@@ -43,4 +43,15 @@ process GATK4_MERGEBAMALIGNMENT {
         gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
     END_VERSIONS
     """
+
+    stub:
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    """
+    touch ${prefix}.bam
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
+    END_VERSIONS
+    """
 }
@@ -57,4 +57,18 @@ process GATK4_MUTECT2 {
         gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
     END_VERSIONS
     """
+
+    stub:
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    """
+    touch ${prefix}.vcf.gz
+    touch ${prefix}.vcf.gz.tbi
+    touch ${prefix}.vcf.gz.stats
+    touch ${prefix}.f1r2.tar.gz
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
+    END_VERSIONS
+    """
 }
@@ -39,4 +39,15 @@ process GATK4_REVERTSAM {
         gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
     END_VERSIONS
     """
+
+    stub:
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    """
+    touch ${prefix}.reverted.bam
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
+    END_VERSIONS
+    """
 }
@@ -40,4 +40,17 @@ process GATK4_SAMTOFASTQ {
         gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
     END_VERSIONS
     """
+
+    stub:
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    """
+    touch ${prefix}.fastq.gz
+    touch ${prefix}_1.fastq.gz
+    touch ${prefix}_2.fastq.gz
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
+    END_VERSIONS
+    """
 }
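The stub blocks added to the four GATK4 modules above only `touch` placeholder outputs and write a versions.yml. They are exercised when Nextflow is launched with the -stub-run option (alias -stub), for example:

nextflow run main.nf -profile test,docker -stub-run

which lets channel wiring and output captures be tested without running GATK itself (the entry point named here is hypothetical).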
@@ -23,7 +23,7 @@ process METAPHLAN3 {
     script:
     def args = task.ext.args ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
-    def input_type = ("$input".endsWith(".fastq.gz")) ? "--input_type fastq" : ("$input".contains(".fasta")) ? "--input_type fasta" : ("$input".endsWith(".bowtie2out.txt")) ? "--input_type bowtie2out" : "--input_type sam"
+    def input_type = ("$input".endsWith(".fastq.gz") || "$input".endsWith(".fq.gz")) ? "--input_type fastq" : ("$input".contains(".fasta")) ? "--input_type fasta" : ("$input".endsWith(".bowtie2out.txt")) ? "--input_type bowtie2out" : "--input_type sam"
     def input_data = ("$input_type".contains("fastq")) && !meta.single_end ? "${input[0]},${input[1]}" : "$input"
     def bowtie2_out = "$input_type" == "--input_type bowtie2out" || "$input_type" == "--input_type sam" ? '' : "--bowtie2out ${prefix}.bowtie2out.txt"
@@ -27,8 +27,8 @@ process MINIMAP2_ALIGN {
     def prefix = task.ext.prefix ?: "${meta.id}"
     def input_reads = meta.single_end ? "$reads" : "${reads[0]} ${reads[1]}"
     def bam_output = bam_format ? "-a | samtools sort | samtools view -@ ${task.cpus} -b -h -o ${prefix}.bam" : "-o ${prefix}.paf"
-    def cigar_paf = cigar_paf_format && !sam_format ? "-c" : ''
-    def set_cigar_bam = cigar_bam && sam_format ? "-L" : ''
+    def cigar_paf = cigar_paf_format && !bam_format ? "-c" : ''
+    def set_cigar_bam = cigar_bam && bam_format ? "-L" : ''
    """
    minimap2 \\
        $args \\
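With this fix the CIGAR guards key off `bam_format`, the flag the module actually receives; the old guards referenced a `sam_format` name that does not match any input. A hedged call sketch, with the positional value inputs assumed from the module signature and hypothetical files:

include { MINIMAP2_ALIGN } from './modules/minimap2/align/main'

workflow {
    ch_reads = Channel.of([ [ id:'test', single_end:true ], file('reads.fastq.gz') ]) // hypothetical
    // assumed positional inputs: reference, bam_format, cigar_paf_format, cigar_bam
    MINIMAP2_ALIGN ( ch_reads, file('genome.fasta'), true, false, false )
}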
@@ -2,10 +2,10 @@ process PICARD_ADDORREPLACEREADGROUPS {
     tag "$meta.id"
     label 'process_low'

-    conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam)

@@ -38,12 +38,12 @@ process PICARD_ADDORREPLACEREADGROUPS {
         -Xmx${avail_mem}g \\
         --INPUT ${bam} \\
         --OUTPUT ${prefix}.bam \\
-        -ID ${ID} \\
-        -LB ${LIBRARY} \\
-        -PL ${PLATFORM} \\
-        -PU ${BARCODE} \\
-        -SM ${SAMPLE} \\
-        -CREATE_INDEX true
+        --RGID ${ID} \\
+        --RGLB ${LIBRARY} \\
+        --RGPL ${PLATFORM} \\
+        --RGPU ${BARCODE} \\
+        --RGSM ${SAMPLE} \\
+        --CREATE_INDEX true

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
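The Picard changes in this commit all follow one pattern: a bump from 2.26.x to 2.27.1 plus a switch from the legacy single-dash and `KEY=value` argument styles to the GNU-style double-dash options that newer Picard releases expect. The same substitution applies when overriding arguments from configuration; a hedged sketch using a standard Picard option:

process {
    withName: 'PICARD_.*' {
        // assumption: any extra options should now use the double-dash style
        ext.args = '--VALIDATION_STRINGENCY LENIENT'
    }
}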
@@ -2,10 +2,10 @@ process PICARD_CLEANSAM {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam)

@@ -31,8 +31,8 @@ process PICARD_CLEANSAM {
         -Xmx${avail_mem}g \\
         CleanSam \\
         ${args} \\
-        -I ${bam} \\
-        -O ${prefix}.bam
+        --INPUT ${bam} \\
+        --OUTPUT ${prefix}.bam

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
@@ -2,10 +2,10 @@ process PICARD_COLLECTHSMETRICS {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam)

@@ -38,10 +38,10 @@ process PICARD_COLLECTHSMETRICS {
         CollectHsMetrics \\
         $args \\
         $reference \\
-        -BAIT_INTERVALS $bait_intervals \\
-        -TARGET_INTERVALS $target_intervals \\
-        -INPUT $bam \\
-        -OUTPUT ${prefix}.CollectHsMetrics.coverage_metrics
+        --BAIT_INTERVALS $bait_intervals \\
+        --TARGET_INTERVALS $target_intervals \\
+        --INPUT $bam \\
+        --OUTPUT ${prefix}.CollectHsMetrics.coverage_metrics


     cat <<-END_VERSIONS > versions.yml
@@ -2,10 +2,10 @@ process PICARD_COLLECTMULTIPLEMETRICS {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam)

@@ -33,9 +33,9 @@ process PICARD_COLLECTMULTIPLEMETRICS {
         -Xmx${avail_mem}g \\
         CollectMultipleMetrics \\
         $args \\
-        INPUT=$bam \\
-        OUTPUT=${prefix}.CollectMultipleMetrics \\
-        REFERENCE_SEQUENCE=$fasta
+        --INPUT $bam \\
+        --OUTPUT ${prefix}.CollectMultipleMetrics \\
+        --REFERENCE_SEQUENCE $fasta

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
@@ -2,13 +2,13 @@ process PICARD_COLLECTWGSMETRICS {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
-    tuple val(meta), path(bam), path(bai)
+    tuple val(meta), path(bam)
     path fasta

     output:

@@ -32,9 +32,10 @@ process PICARD_COLLECTWGSMETRICS {
         -Xmx${avail_mem}g \\
         CollectWgsMetrics \\
         $args \\
-        INPUT=$bam \\
-        OUTPUT=${prefix}.CollectWgsMetrics.coverage_metrics \\
-        REFERENCE_SEQUENCE=$fasta
+        --INPUT $bam \\
+        --OUTPUT ${prefix}.CollectWgsMetrics.coverage_metrics \\
+        --REFERENCE_SEQUENCE $fasta
+

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
@@ -2,10 +2,10 @@ process PICARD_CREATESEQUENCEDICTIONARY {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(fasta)

@@ -31,8 +31,8 @@ process PICARD_CREATESEQUENCEDICTIONARY {
         -Xmx${avail_mem}g \\
         CreateSequenceDictionary \\
         $args \\
-        R=$fasta \\
-        O=${prefix}.dict
+        --REFERENCE $fasta \\
+        --OUTPUT ${prefix}.dict

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
@@ -2,10 +2,10 @@ process PICARD_CROSSCHECKFINGERPRINTS {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(input1)
@@ -2,10 +2,10 @@ process PICARD_FILTERSAMREADS {
     tag "$meta.id"
     label 'process_low'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam), path(readlist)
@@ -2,10 +2,10 @@ process PICARD_FIXMATEINFORMATION {
     tag "$meta.id"
     label 'process_low'

-    conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam)

@@ -31,8 +31,8 @@ process PICARD_FIXMATEINFORMATION {
     picard \\
         FixMateInformation \\
         -Xmx${avail_mem}g \\
-        -I ${bam} \\
-        -O ${prefix}.bam \\
+        --INPUT ${bam} \\
+        --OUTPUT ${prefix}.bam \\
         --VALIDATION_STRINGENCY ${STRINGENCY}

     cat <<-END_VERSIONS > versions.yml
@@ -2,10 +2,10 @@ process PICARD_LIFTOVERVCF {
     tag "$meta.id"
     label 'process_low'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(input_vcf)

@@ -35,11 +35,11 @@ process PICARD_LIFTOVERVCF {
         -Xmx${avail_mem}g \\
         LiftoverVcf \\
         $args \\
-        I=$input_vcf \\
-        O=${prefix}.lifted.vcf.gz \\
-        CHAIN=$chain \\
-        REJECT=${prefix}.unlifted.vcf.gz \\
-        R=$fasta
+        --INPUT $input_vcf \\
+        --OUTPUT ${prefix}.lifted.vcf.gz \\
+        --CHAIN $chain \\
+        --REJECT ${prefix}.unlifted.vcf.gz \\
+        --REFERENCE_SEQUENCE $fasta

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
@@ -2,10 +2,10 @@ process PICARD_MARKDUPLICATES {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam)

@@ -33,9 +33,9 @@ process PICARD_MARKDUPLICATES {
         -Xmx${avail_mem}g \\
         MarkDuplicates \\
         $args \\
-        I=$bam \\
-        O=${prefix}.bam \\
-        M=${prefix}.MarkDuplicates.metrics.txt
+        --INPUT $bam \\
+        --OUTPUT ${prefix}.bam \\
+        --METRICS_FILE ${prefix}.MarkDuplicates.metrics.txt

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
@@ -2,10 +2,10 @@ process PICARD_MERGESAMFILES {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bams)

@@ -33,8 +33,8 @@ process PICARD_MERGESAMFILES {
         -Xmx${avail_mem}g \\
         MergeSamFiles \\
         $args \\
-        ${'INPUT='+bam_files.join(' INPUT=')} \\
-        OUTPUT=${prefix}.bam
+        ${'--INPUT '+bam_files.join(' --INPUT ')} \\
+        --OUTPUT ${prefix}.bam
     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
         picard: \$( echo \$(picard MergeSamFiles --version 2>&1) | grep -o 'Version:.*' | cut -f2- -d:)
@@ -2,10 +2,10 @@ process PICARD_SORTSAM {
     tag "$meta.id"
     label 'process_low'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam)
@@ -2,10 +2,10 @@ process PICARD_SORTVCF {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(vcf)

@@ -22,8 +22,8 @@ process PICARD_SORTVCF {
     script:
     def args = task.ext.args ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
-    def seq_dict = sequence_dict ? "-SEQUENCE_DICTIONARY $sequence_dict" : ""
-    def reference = reference ? "-REFERENCE_SEQUENCE $reference" : ""
+    def seq_dict = sequence_dict ? "--SEQUENCE_DICTIONARY $sequence_dict" : ""
+    def reference = reference ? "--REFERENCE_SEQUENCE $reference" : ""
     def avail_mem = 3
     if (!task.memory) {
         log.info '[Picard SortVcf] Available memory not known - defaulting to 3GB. Specify process memory requirements to change this.'
@@ -41,4 +41,16 @@ process SAMTOOLS_VIEW {
         samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
     END_VERSIONS
     """
+
+    stub:
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    """
+    touch ${prefix}.bam
+    touch ${prefix}.cram
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
+    END_VERSIONS
+    """
 }
modules/srst2/srst2/main.nf (47 changes, new file)

@@ -0,0 +1,47 @@
+process SRST2_SRST2 {
+    tag "${meta.id}"
+    label 'process_low'
+
+    conda (params.enable_conda ? "bioconda::srst2=0.2.0" : null)
+    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
+        'https://depot.galaxyproject.org/singularity/srst2%3A0.2.0--py27_2':
+        'quay.io/biocontainers/srst2:0.2.0--py27_2'}"
+
+    input:
+    tuple val(meta), path(fastq_s), path(db)
+
+    output:
+    tuple val(meta), path("*_genes_*_results.txt")     , optional:true, emit: gene_results
+    tuple val(meta), path("*_fullgenes_*_results.txt") , optional:true, emit: fullgene_results
+    tuple val(meta), path("*_mlst_*_results.txt")      , optional:true, emit: mlst_results
+    tuple val(meta), path("*.pileup")                  , emit: pileup
+    tuple val(meta), path("*.sorted.bam")              , emit: sorted_bam
+    path "versions.yml"                                , emit: versions
+
+    when:
+    task.ext.when == null || task.ext.when
+
+    script:
+    def args = task.ext.args ?: ""
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    def read_s = meta.single_end ? "--input_se ${fastq_s}" : "--input_pe ${fastq_s[0]} ${fastq_s[1]}"
+    if (meta.db=="gene") {
+        database = "--gene_db ${db}"
+    } else if (meta.db=="mlst") {
+        database = "--mlst_db ${db}"
+    } else {
+        error "Please set meta.db to either \"gene\" or \"mlst\""
+    }
+    """
+    srst2 \\
+        ${read_s} \\
+        --threads $task.cpus \\
+        --output ${prefix} \\
+        ${database} \\
+        $args
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        srst2: \$(echo \$(srst2 --version 2>&1) | sed 's/srst2 //' )
+    END_VERSIONS
+    """
+}
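A usage sketch for the new module: `meta.db` routes the database file to `--gene_db` or `--mlst_db`, and anything else raises the error shown in the script block. File names below are hypothetical.

include { SRST2_SRST2 } from './modules/srst2/srst2/main'

workflow {
    ch_input = Channel.of([
        [ id:'test', single_end:false, db:'gene' ],            // meta.db selects --gene_db
        [ file('test_1.fastq.gz'), file('test_2.fastq.gz') ],  // hypothetical paired reads
        file('gene_database.fasta')                            // hypothetical gene database
    ])
    SRST2_SRST2 ( ch_input )
}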
modules/srst2/srst2/meta.yml (72 changes, new file)

@@ -0,0 +1,72 @@
+name: srst2_srst2
+description: |
+  Short Read Sequence Typing for Bacterial Pathogens is a program designed to take Illumina sequence data,
+  a MLST database and/or a database of gene sequences (e.g. resistance genes, virulence genes, etc)
+  and report the presence of STs and/or reference genes.
+keywords:
+  - mlst
+  - typing
+  - illumina
+tools:
+  - srst2:
+      description: "Short Read Sequence Typing for Bacterial Pathogens"
+      homepage: "http://katholt.github.io/srst2/"
+      documentation: "https://github.com/katholt/srst2/blob/master/README.md"
+      tool_dev_url: "https://github.com/katholt/srst2"
+      doi: "10.1186/s13073-014-0090-6"
+      licence: ["BSD"]
+
+input:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        id: should be the identification number or sample name
+        single_end: should be true for single-end data and false for paired-end data
+        db: should be either 'gene' to use the --gene_db option or "mlst" to use the --mlst_db option
+        e.g. [ id:'sample', single_end:false , db:'gene']
+  - fastq_s:
+      type: file
+      description: |
+        gzipped fastq file. If files are NOT in MiSeq format (sample_S1_L001_R1_001.fastq.gz),
+        use the --forward and --reverse parameters; otherwise the default is _1, i.e. forward
+        reads are expected as sample_1.fastq.gz.
+      pattern: "*.fastq.gz"
+  - db:
+      type: file
+      description: Database in FASTA format
+      pattern: "*.fasta"
+
+output:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'sample', single_end:false ]
+  - versions:
+      type: file
+      description: File containing software versions
+      pattern: "versions.yml"
+  - txt:
+      type: file
+      description: A detailed report, with one row per gene per sample, described here: github.com/katholt/srst2#gene-typing
+      pattern: "*_fullgenes_*_results.txt"
+  - txt:
+      type: file
+      description: A tabulated summary report of samples x genes.
+      pattern: "*_genes_*_results.txt"
+  - txt:
+      type: file
+      description: A tabulated summary report of mlst subtyping.
+      pattern: "*_mlst_*_results.txt"
+  - bam:
+      type: file
+      description: Sorted BAM file
+      pattern: "*.sorted.bam"
+  - pileup:
+      type: file
+      description: SAMtools pileup file
+      pattern: "*.pileup"
+
+authors:
+  - "@jvhagey"
modules/vardictjava/main.nf (50 changes, new file)

@@ -0,0 +1,50 @@
+def VERSION = '1.8.3'
+
+process VARDICTJAVA {
+    tag "$meta.id"
+    label 'process_medium'
+
+    conda (params.enable_conda ? "bioconda::vardict-java=1.8.3" : null)
+    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
+        'https://depot.galaxyproject.org/singularity/vardict-java:1.8.3--hdfd78af_0':
+        'quay.io/biocontainers/vardict-java:1.8.3--hdfd78af_0' }"
+
+    input:
+    tuple val(meta), path(bam), path(bai)
+    path(bed)
+    tuple path(fasta), path(fasta_fai)
+
+    output:
+    tuple val(meta), path("*.vcf.gz"), emit: vcf
+    path "versions.yml"              , emit: versions
+
+    when:
+    task.ext.when == null || task.ext.when
+
+    script:
+    def args = task.ext.args ?: ''
+    def args2 = task.ext.args2 ?: ''
+    def prefix = task.ext.prefix ?: "${meta.id}"
+
+    """
+    vardict-java \\
+        $args \\
+        -c 1 -S 2 -E 3 \\
+        -b $bam \\
+        -th $task.cpus \\
+        -N $prefix \\
+        -G $fasta \\
+        $bed \\
+        | teststrandbias.R \\
+        | var2vcf_valid.pl \\
+            $args2 \\
+            -N $prefix \\
+        | gzip -c > ${prefix}.vcf.gz
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        vardict-java: $VERSION
+        var2vcf_valid.pl: \$(echo \$(var2vcf_valid.pl -h | sed -n 2p | awk '{ print \$2 }'))
+    END_VERSIONS
+    """
+}
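A call sketch matching the three input declarations above; note that the reference FASTA and its index travel together as one tuple. File names are hypothetical.

include { VARDICTJAVA } from './modules/vardictjava/main'

workflow {
    ch_bam = Channel.of([ [ id:'test' ], file('test.bam'), file('test.bam.bai') ]) // hypothetical
    VARDICTJAVA (
        ch_bam,
        file('regions.bed'),
        [ file('genome.fasta'), file('genome.fasta.fai') ]
    )
    VARDICTJAVA.out.vcf.view()
}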
modules/vardictjava/meta.yml (60 changes, new file)

@@ -0,0 +1,60 @@
+name: "vardictjava"
+
+description: The Java port of the VarDict variant caller
+keywords:
+  - variant calling
+  - VarDict
+  - AstraZeneca
+tools:
+  - "vardictjava":
+      description: "Java port of the VarDict variant discovery program"
+      homepage: "https://github.com/AstraZeneca-NGS/VarDictJava"
+      documentation: "https://github.com/AstraZeneca-NGS/VarDictJava"
+      tool_dev_url: "https://github.com/AstraZeneca-NGS/VarDictJava"
+      doi: "10.1093/nar/gkw227"
+      licence: "['MIT']"
+
+input:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - bam:
+      type: file
+      description: BAM/SAM file
+      pattern: "*.{bam,sam}"
+  - bai:
+      type: file
+      description: Index of the BAM file
+      pattern: "*.bai"
+  - fasta:
+      type: file
+      description: FASTA of the reference genome
+      pattern: "*.{fa,fasta}"
+  - fasta_fai:
+      type: file
+      description: The index of the FASTA of the reference genome
+      pattern: "*.fai"
+  - bed:
+      type: file
+      description: BED with the regions of interest
+      pattern: "*.bed"
+
+output:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - versions:
+      type: file
+      description: File containing software versions
+      pattern: "versions.yml"
+  - vcf:
+      type: file
+      description: VCF file output
+      pattern: "*.vcf.gz"
+
+authors:
+  - "@nvnieuwk"
41
subworkflows/nf-core/bam_qc_picard/main.nf
Normal file
41
subworkflows/nf-core/bam_qc_picard/main.nf
Normal file
|
@ -0,0 +1,41 @@
|
//
// Run QC steps on BAM/CRAM files using Picard
//

include { PICARD_COLLECTMULTIPLEMETRICS } from '../../../modules/picard/collectmultiplemetrics/main'
include { PICARD_COLLECTWGSMETRICS      } from '../../../modules/picard/collectwgsmetrics/main'
include { PICARD_COLLECTHSMETRICS       } from '../../../modules/picard/collecthsmetrics/main'

workflow BAM_QC_PICARD {
    take:
    ch_bam             // channel: [ val(meta), [ bam ]]
    ch_fasta           // channel: [ fasta ]
    ch_fasta_fai       // channel: [ fasta_fai ]
    ch_bait_interval   // channel: [ bait_interval ]
    ch_target_interval // channel: [ target_interval ]

    main:
    ch_versions = Channel.empty()
    ch_coverage_metrics = Channel.empty()

    PICARD_COLLECTMULTIPLEMETRICS( ch_bam, ch_fasta )
    ch_versions = ch_versions.mix(PICARD_COLLECTMULTIPLEMETRICS.out.versions.first())

    if (ch_bait_interval || ch_target_interval) {
        if (!ch_bait_interval) log.error("Bait interval channel is empty")
        if (!ch_target_interval) log.error("Target interval channel is empty")
        PICARD_COLLECTHSMETRICS( ch_bam, ch_fasta, ch_fasta_fai, ch_bait_interval, ch_target_interval )
        ch_coverage_metrics = ch_coverage_metrics.mix(PICARD_COLLECTHSMETRICS.out.metrics)
        ch_versions = ch_versions.mix(PICARD_COLLECTHSMETRICS.out.versions.first())
    } else {
        PICARD_COLLECTWGSMETRICS( ch_bam, ch_fasta )
        ch_versions = ch_versions.mix(PICARD_COLLECTWGSMETRICS.out.versions.first())
        ch_coverage_metrics = ch_coverage_metrics.mix(PICARD_COLLECTWGSMETRICS.out.metrics)
    }

    emit:
    coverage_metrics = ch_coverage_metrics                       // channel: [ val(meta), [ coverage_metrics ] ]
    multiple_metrics = PICARD_COLLECTMULTIPLEMETRICS.out.metrics // channel: [ val(meta), [ multiple_metrics ] ]

    versions = ch_versions // channel: [ versions.yml ]
}
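CollectMultipleMetrics always runs; CollectHsMetrics replaces CollectWgsMetrics only when the interval channels are non-empty. A minimal usage sketch under that assumption (paths illustrative; the tests under tests/subworkflows/nf-core/bam_qc_picard later in this commit follow the same shape):

    include { BAM_QC_PICARD } from './subworkflows/nf-core/bam_qc_picard/main'

    workflow {
        ch_bam = Channel.of([ [ id:'sample1' ], file('sample1.bam') ])

        // WGS mode: empty interval channels fall through to CollectWgsMetrics
        BAM_QC_PICARD ( ch_bam, file('genome.fasta'), file('genome.fasta.fai'), [], [] )

        // Targeted mode would pass both interval lists instead:
        // BAM_QC_PICARD ( ch_bam, fasta, fasta_fai, file('baits.interval_list'), file('targets.interval_list') )
    }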
subworkflows/nf-core/bam_qc_picard/meta.yml (new file, 60 lines)
@@ -0,0 +1,60 @@
name: bam_qc_picard
description: Produces comprehensive statistics from a BAM/CRAM/SAM file
keywords:
  - statistics
  - counts
  - hs_metrics
  - wgs_metrics
  - bam
  - sam
  - cram
modules:
  - picard/collectmultiplemetrics
  - picard/collectwgsmetrics
  - picard/collecthsmetrics
input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
      description: BAM/CRAM/SAM file
      pattern: "*.{bam,cram,sam}"
  - fasta:
      type: optional file
      description: Reference fasta file
      pattern: "*.{fasta,fa}"
  - fasta_fai:
      type: optional file
      description: Reference fasta file index
      pattern: "*.{fasta,fa}.fai"
  - bait_intervals:
      type: optional file
      description: An interval list file that contains the locations of the baits used.
      pattern: "baits.interval_list"
  - target_intervals:
      type: optional file
      description: An interval list file that contains the locations of the targets.
      pattern: "targets.interval_list"
output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - coverage_metrics:
      type: file
      description: Alignment metrics files generated by picard CollectHsMetrics or CollectWgsMetrics
      pattern: "*_metrics.txt"
  - multiple_metrics:
      type: file
      description: Alignment metrics files generated by picard CollectMultipleMetrics
      pattern: "*_{metrics}"
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"
authors:
  - "@matthdsm"
@@ -603,6 +603,10 @@ elprep/filter:
  - modules/elprep/filter/**
  - tests/modules/elprep/filter/**

elprep/merge:
  - modules/elprep/merge/**
  - tests/modules/elprep/merge/**

elprep/split:
  - modules/elprep/split/**
  - tests/modules/elprep/split/**

@@ -1775,6 +1779,10 @@ sratools/prefetch:
  - modules/sratools/prefetch/**
  - tests/modules/sratools/prefetch/**

srst2/srst2:
  - modules/srst2/srst2/**
  - tests/modules/srst2/srst2/**

ssuissero:
  - modules/ssuissero/**
  - tests/modules/ssuissero/**

@@ -1916,6 +1924,10 @@ unzip:
  - modules/unzip/**
  - tests/modules/unzip/**

vardictjava:
  - modules/vardictjava/**
  - tests/modules/vardictjava/**

variantbam:
  - modules/variantbam/**
  - tests/modules/variantbam/**
@@ -14,6 +14,7 @@ params {
        genome_paf          = "${test_data_dir}/genomics/sarscov2/genome/genome.paf"
        genome_sizes        = "${test_data_dir}/genomics/sarscov2/genome/genome.sizes"
        transcriptome_fasta = "${test_data_dir}/genomics/sarscov2/genome/transcriptome.fasta"
        proteome_fasta      = "${test_data_dir}/genomics/sarscov2/genome/proteome.fasta"
        transcriptome_paf   = "${test_data_dir}/genomics/sarscov2/genome/transcriptome.paf"

        test_bed            = "${test_data_dir}/genomics/sarscov2/genome/bed/test.bed"
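Test workflows reach these files through the params.test_data map, so the key added here resolves as in the following sketch, mirroring how the diamond tests below consume it:

    // params.test_data['sarscov2']['genome']['proteome_fasta']
    //   -> "${test_data_dir}/genomics/sarscov2/genome/proteome.fasta"
    def proteome = file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true)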
@@ -1,8 +1,8 @@
- name: antismash antismashlitedownloaddatabases test_antismash_antismashlitedownloaddatabases
  command: nextflow run tests/modules/antismash/antismashlitedownloaddatabases -entry test_antismash_antismashlitedownloaddatabases -c tests/config/nextflow.config
  tags:
    - antismash
    - antismash/antismashlitedownloaddatabases
    - antismash
  files:
    - path: output/antismash/versions.yml
      md5sum: 24859c67023abab99de295d3675a24b6
@@ -12,6 +12,5 @@
    - path: output/antismash/antismash_db/pfam
    - path: output/antismash/antismash_db/resfam
    - path: output/antismash/antismash_db/tigrfam
    - path: output/antismash/css
    - path: output/antismash/detection
    - path: output/antismash/modules
    - path: output/antismash/antismash_dir
    - path: output/antismash/antismash_dir/detection/hmm_detection/data/bgc_seeds.hmm
@@ -2,13 +2,29 @@

nextflow.enable.dsl = 2

include { BAMTOOLS_SPLIT } from '../../../../modules/bamtools/split/main.nf'
include { BAMTOOLS_SPLIT as BAMTOOLS_SPLIT_SINGLE } from '../../../../modules/bamtools/split/main.nf'
include { BAMTOOLS_SPLIT as BAMTOOLS_SPLIT_MULTIPLE } from '../../../../modules/bamtools/split/main.nf'

workflow test_bamtools_split {
workflow test_bamtools_split_single_input {

    input = [
        [ id:'test', single_end:false ], // meta map
        file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true) ]
        file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
    ]

    BAMTOOLS_SPLIT ( input )
    BAMTOOLS_SPLIT_SINGLE ( input )
}

workflow test_bamtools_split_multiple {

    input = [
        [ id:'test', single_end:false ], // meta map
        [
            file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true),
            file(params.test_data['homo_sapiens']['illumina']['test2_paired_end_sorted_bam'], checkIfExists: true)
        ]
    ]

    BAMTOOLS_SPLIT_MULTIPLE ( input )
}

@@ -1,10 +1,23 @@
- name: bamtools split test_bamtools_split
  command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
- name: bamtools split test_bamtools_split_single_input
  command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split_single_input -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
  tags:
    - bamtools/split
    - bamtools
    - bamtools/split
  files:
    - path: output/bamtools/test.paired_end.sorted.REF_chr22.bam
    - path: output/bamtools/test.REF_chr22.bam
      md5sum: b7dc50e0edf9c6bfc2e3b0e6d074dc07
    - path: output/bamtools/test.paired_end.sorted.REF_unmapped.bam
    - path: output/bamtools/test.REF_unmapped.bam
      md5sum: e0754bf72c51543b2d745d96537035fb
    - path: output/bamtools/versions.yml

- name: bamtools split test_bamtools_split_multiple
  command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split_multiple -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
  tags:
    - bamtools
    - bamtools/split
  files:
    - path: output/bamtools/test.REF_chr22.bam
      md5sum: 585675bea34c48ebe9db06a561d4b4fa
    - path: output/bamtools/test.REF_unmapped.bam
      md5sum: 16ad644c87b9471f3026bc87c98b4963
    - path: output/bamtools/versions.yml
@@ -7,9 +7,22 @@ include { DIAMOND_BLASTP } from '../../../../modules/diamond/blastp/main.nf'

workflow test_diamond_blastp {

    db = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
    fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
    db = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
    fasta = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
    out_ext = 'txt'
    blast_columns = 'qseqid qlen'

    DIAMOND_MAKEDB ( db )
    DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db )
    DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
}

workflow test_diamond_blastp_daa {

    db = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
    fasta = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
    out_ext = 'daa'
    blast_columns = []

    DIAMOND_MAKEDB ( db )
    DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
}
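The module now takes an output extension and a blast-columns string in addition to the query and database. A minimal sketch of the two call shapes used in these tests (paths hypothetical; judging by the 'Nonsense file extension' comment in the blastx test below, unrecognised extensions fall back to the default tabular output):

    include { DIAMOND_MAKEDB } from './modules/diamond/makedb/main.nf'
    include { DIAMOND_BLASTP } from './modules/diamond/blastp/main.nf'

    workflow {
        db    = [ file('proteome.fasta') ] // hypothetical inputs
        query = [ file('query.fasta') ]

        DIAMOND_MAKEDB ( db )

        // 'txt' requests tabular output; the column string only applies there
        DIAMOND_BLASTP ( [ [id:'sample1'], query ], DIAMOND_MAKEDB.out.db, 'txt', 'qseqid qlen' )

        // for a DIAMOND archive instead:
        // DIAMOND_BLASTP ( [ [id:'sample1'], query ], DIAMOND_MAKEDB.out.db, 'daa', [] )
    }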
@@ -1,8 +1,17 @@
- name: diamond blastp
  command: nextflow run ./tests/modules/diamond/blastp -entry test_diamond_blastp -c ./tests/config/nextflow.config -c ./tests/modules/diamond/blastp/nextflow.config
- name: diamond blastp test_diamond_blastp
  command: nextflow run tests/modules/diamond/blastp -entry test_diamond_blastp -c tests/config/nextflow.config
  tags:
    - diamond
    - diamond/blastp
    - diamond
  files:
    - path: ./output/diamond/test.diamond_blastp.txt
      md5sum: 3ca7f6290c1d8741c573370e6f8b4db0
    - path: output/diamond/test.diamond_blastp.txt
    - path: output/diamond/versions.yml

- name: diamond blastp test_diamond_blastp_daa
  command: nextflow run tests/modules/diamond/blastp -entry test_diamond_blastp_daa -c tests/config/nextflow.config
  tags:
    - diamond/blastp
    - diamond
  files:
    - path: output/diamond/test.diamond_blastp.daa
    - path: output/diamond/versions.yml
@@ -7,9 +7,22 @@ include { DIAMOND_BLASTX } from '../../../../modules/diamond/blastx/main.nf'

workflow test_diamond_blastx {

    db = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
    db = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
    fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
    out_ext = 'tfdfdt' // Nonsense file extension to check default case.
    blast_columns = 'qseqid qlen'

    DIAMOND_MAKEDB ( db )
    DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db )
    DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
}

workflow test_diamond_blastx_daa {

    db = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
    fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
    out_ext = 'daa'
    blast_columns = []

    DIAMOND_MAKEDB ( db )
    DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
}
@@ -1,8 +1,18 @@
- name: diamond blastx
  command: nextflow run ./tests/modules/diamond/blastx -entry test_diamond_blastx -c ./tests/config/nextflow.config -c ./tests/modules/diamond/blastx/nextflow.config
- name: diamond blastx test_diamond_blastx
  command: nextflow run tests/modules/diamond/blastx -entry test_diamond_blastx -c tests/config/nextflow.config
  tags:
    - diamond
    - diamond/blastx
  files:
    - path: ./output/diamond/test.diamond_blastx.txt
      md5sum: d41d8cd98f00b204e9800998ecf8427e
    - path: output/diamond/test.diamond_blastx.txt
    - path: output/diamond/versions.yml

- name: diamond blastx test_diamond_blastx_daa
  command: nextflow run tests/modules/diamond/blastx -entry test_diamond_blastx_daa -c tests/config/nextflow.config
  tags:
    - diamond
    - diamond/blastx
  files:
    - path: output/diamond/test.diamond_blastx.daa
      md5sum: 0df4a833408416f32981415873facc11
    - path: output/diamond/versions.yml
@@ -6,7 +6,7 @@ include { DIAMOND_MAKEDB } from '../../../../modules/diamond/makedb/main.nf'

workflow test_diamond_makedb {

    input = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
    input = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]

    DIAMOND_MAKEDB ( input )
}
@@ -1,8 +1,9 @@
- name: diamond makedb test_diamond_makedb
  command: nextflow run ./tests/modules/diamond/makedb -entry test_diamond_makedb -c ./tests/config/nextflow.config -c ./tests/modules/diamond/makedb/nextflow.config
  command: nextflow run tests/modules/diamond/makedb -entry test_diamond_makedb -c tests/config/nextflow.config
  tags:
    - diamond
    - diamond/makedb
    - diamond
  files:
    - path: output/diamond/genome.fasta.dmnd
      md5sum: 2447fb376394c20d43ea3aad2aa5d15d
    - path: output/diamond/proteome.fasta.dmnd
      md5sum: fc28c50b202dd7a7c5451cddff2ba1f4
    - path: output/diamond/versions.yml
tests/modules/elprep/merge/main.nf (new file, 17 lines)
@@ -0,0 +1,17 @@
#!/usr/bin/env nextflow

nextflow.enable.dsl = 2

include { ELPREP_SPLIT } from '../../../../modules/elprep/split/main.nf'
include { ELPREP_MERGE } from '../../../../modules/elprep/merge/main.nf'

workflow test_elprep_merge {

    input = [
        [ id:'test', single_end:false ], // meta map
        file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
    ]

    ELPREP_SPLIT ( input )
    ELPREP_MERGE ( ELPREP_SPLIT.out.bam )
}
tests/modules/elprep/merge/nextflow.config (new file, 5 lines)
@@ -0,0 +1,5 @@
process {
    withName : ELPREP_MERGE {
        publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }
    }
}
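This publishDir closure, repeated across the test configs in this commit, derives the output directory from the fully qualified process name. A worked example in plain Groovy, with a hypothetical qualified name:

    def name = 'NFCORE_TESTS:ELPREP_MERGE' // hypothetical task.process value
    def dir = name.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
    // tokenize(':')[-1] -> 'ELPREP_MERGE'; tokenize('_')[0] -> 'ELPREP'; toLowerCase() -> 'elprep'
    assert dir == 'elprep' // outputs land under "${params.outdir}/elprep"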
tests/modules/elprep/merge/test.yml (new file, 8 lines)
@@ -0,0 +1,8 @@
- name: elprep merge test_elprep_merge
  command: nextflow run tests/modules/elprep/merge -entry test_elprep_merge -c tests/config/nextflow.config
  tags:
    - elprep
    - elprep/merge
  files:
    - path: output/elprep/output/test.bam
    - path: output/elprep/versions.yml
@@ -14,3 +14,14 @@ workflow test_gatk4_mergebamalignment {

    GATK4_MERGEBAMALIGNMENT ( input, fasta, dict )
}

workflow test_gatk4_mergebamalignment_stubs {
    input = [ [ id:'test' ], // meta map
              "test_foo.bam",
              "test_bar.bam"
            ]
    fasta = "genome.fasta"
    dict = "genome.fasta.dict"

    GATK4_MERGEBAMALIGNMENT ( input, fasta, dict )
}
@@ -7,3 +7,12 @@
    - path: output/gatk4/test.bam
      md5sum: e6f1b343700b7ccb94e81ae127433988
    - path: output/gatk4/versions.yml

- name: gatk4 mergebamalignment test_gatk4_mergebamalignment_stubs
  command: nextflow run ./tests/modules/gatk4/mergebamalignment -entry test_gatk4_mergebamalignment -c ./tests/config/nextflow.config -c ./tests/modules/gatk4/mergebamalignment/nextflow.config -stub-run
  tags:
    - gatk4
    - gatk4/mergebamalignment
  files:
    - path: output/gatk4/test.bam
    - path: output/gatk4/versions.yml
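These *_stubs entries run the regular entry workflows with -stub-run, under which Nextflow executes each process's stub: block instead of its script: block, so plain placeholder strings can stand in for real inputs and only the output wiring is checked. A generic module-side sketch of the mechanism (a hypothetical example_tool process, not the actual gatk4 module code):

    process EXAMPLE_TOOL {
        input:
        tuple val(meta), path(bam)

        output:
        tuple val(meta), path("*.merged.bam"), emit: bam
        path "versions.yml"                  , emit: versions

        script: // normal runs execute this block
        """
        example_tool merge $bam -o ${meta.id}.merged.bam

        cat <<-END_VERSIONS > versions.yml
        "${task.process}":
            example_tool: \$(example_tool --version)
        END_VERSIONS
        """

        stub: // -stub-run executes this block instead
        """
        touch ${meta.id}.merged.bam
        touch versions.yml
        """
    }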
@@ -118,3 +118,25 @@ workflow test_gatk4_mutect2_mitochondria {

    GATK4_MUTECT2_MITO ( input, fasta, fai, dict, [], [], [], [] )
}

workflow test_gatk4_mutect2_tumor_normal_pair_f1r2_stubs {
    input = [ [ id:'test', normal_id:'normal', tumor_id:'tumour' ], // meta map
              [ "foo_paired.bam",
                "foo_paired2.bam"
              ],
              [ "foo_paired.bam.bai",
                "foo_paired2.bam.bai"
              ],
              []
            ]

    fasta = "genome.fasta"
    fai = "genome.fasta.fai"
    dict = "genome.fasta.dict"
    germline_resource = "genome_gnomAD.r2.1.1.vcf.gz"
    germline_resource_tbi = "genome_gnomAD.r2.1.1.vcf.gz.tbi"
    panel_of_normals = "genome_mills_and_1000G.indels.hg38.vcf.gz"
    panel_of_normals_tbi = "genome_mills_and_1000G.indels.hg38.vcf.gz.tbi"

    GATK4_MUTECT2_F1R2 ( input, fasta, fai, dict, germline_resource, germline_resource_tbi, panel_of_normals, panel_of_normals_tbi )
}
@@ -69,3 +69,15 @@
      md5sum: fc6ea14ca2da346babe78161beea28c9
    - path: output/gatk4/test.vcf.gz.tbi
    - path: output/gatk4/versions.yml

- name: gatk4 mutect2 test_gatk4_mutect2_tumor_normal_pair_f1r2_stubs
  command: nextflow run ./tests/modules/gatk4/mutect2 -entry test_gatk4_mutect2_tumor_normal_pair_f1r2 -c ./tests/config/nextflow.config -c ./tests/modules/gatk4/mutect2/nextflow.config -stub-run
  tags:
    - gatk4
    - gatk4/mutect2
  files:
    - path: output/gatk4/test.f1r2.tar.gz
    - path: output/gatk4/test.vcf.gz
    - path: output/gatk4/test.vcf.gz.stats
    - path: output/gatk4/test.vcf.gz.tbi
    - path: output/gatk4/versions.yml
@@ -11,3 +11,11 @@ workflow test_gatk4_revertsam {

    GATK4_REVERTSAM ( input )
}

workflow test_gatk4_revertsam_stubs {
    input = [ [ id:'test' ], // meta map
              "foo_paired_end.bam"
            ]

    GATK4_REVERTSAM ( input )
}
@@ -7,3 +7,12 @@
    - path: output/gatk4/test.reverted.bam
      md5sum: f783a88deb45c3a2c20ca12cbe1c5652
    - path: output/gatk4/versions.yml

- name: gatk4 revertsam test_gatk4_revertsam_stubs
  command: nextflow run ./tests/modules/gatk4/revertsam -entry test_gatk4_revertsam -c ./tests/config/nextflow.config -c ./tests/modules/gatk4/revertsam/nextflow.config -stub-run
  tags:
    - gatk4
    - gatk4/revertsam
  files:
    - path: output/gatk4/test.reverted.bam
    - path: output/gatk4/versions.yml
@@ -19,3 +19,11 @@ workflow test_gatk4_samtofastq_paired_end {

    GATK4_SAMTOFASTQ ( input )
}

workflow test_gatk4_samtofastq_paired_end_stubs {
    input = [ [ id:'test', single_end: false ], // meta map
              [ "foo_paired_end.bam" ]
            ]

    GATK4_SAMTOFASTQ ( input )
}
@@ -19,3 +19,13 @@
    - path: output/gatk4/test_2.fastq.gz
      md5sum: 613bf64c023609e1c62ad6ce9e4be8d7
    - path: output/gatk4/versions.yml

- name: gatk4 samtofastq test_gatk4_samtofastq_paired_end_stubs
  command: nextflow run ./tests/modules/gatk4/samtofastq -entry test_gatk4_samtofastq_paired_end -c ./tests/config/nextflow.config -c ./tests/modules/gatk4/samtofastq/nextflow.config -stub-run
  tags:
    - gatk4
    - gatk4/samtofastq
  files:
    - path: output/gatk4/test_1.fastq.gz
    - path: output/gatk4/test_2.fastq.gz
    - path: output/gatk4/versions.yml
|
|||
- path: output/picard/test.bam
|
||||
md5sum: 7b82f3461c2d80fc6a10385e78c9427f
|
||||
- path: output/picard/versions.yml
|
||||
md5sum: 8a2d176295e1343146ea433c79bb517f
|
||||
|
|
|
@@ -7,4 +7,3 @@
    - path: output/picard/test.bam
      md5sum: a48f8e77a1480445efc57570c3a38a68
    - path: output/picard/versions.yml
      md5sum: e6457d7c6de51bf6f4b577eda65e57ac
@@ -6,8 +6,7 @@ include { PICARD_COLLECTWGSMETRICS } from '../../../../modules/picard/collectwgs

workflow test_picard_collectwgsmetrics {
    input = [ [ id:'test', single_end:false ], // meta map
              file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true),
              file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam_bai'], checkIfExists: true)
              file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true),
            ]
    fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)

@@ -7,4 +7,3 @@
    - path: output/picard/test.dict
      contains: ["SN:MT192765.1"]
    - path: output/picard/versions.yml
      md5sum: b3d8c7ea65b8a6d3237b153d13fe2014
@@ -7,4 +7,3 @@
    - path: output/picard/test.bam
      md5sum: 746102e8c242c0ef42e045c49d320030
    - path: output/picard/versions.yml
      md5sum: 4329ba7cdca8f4f6018dfd5c019ba2eb
@@ -1,5 +1,5 @@
process {
    ext.args = "WARN_ON_MISSING_CONTIG=true"
    ext.args = "--WARN_ON_MISSING_CONTIG true"
    publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }

}
@@ -3,7 +3,7 @@ process {
    publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }

    withName: PICARD_MARKDUPLICATES_UNSORTED {
        ext.args = 'ASSUME_SORT_ORDER=queryname'
        ext.args = '--ASSUME_SORT_ORDER queryname'
    }

}
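Both config changes track Picard's move from the legacy OPTION=value argument style to the POSIX-like --OPTION value style; any other pinned Picard arguments would change the same way, as in this hypothetical override:

    process {
        withName: PICARD_SORTSAM {
            // legacy style was: ext.args = 'SORT_ORDER=coordinate'
            ext.args = '--SORT_ORDER coordinate'
        }
    }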
@@ -22,3 +22,12 @@ workflow test_samtools_view_cram {

    SAMTOOLS_VIEW ( input, fasta )
}

workflow test_samtools_view_stubs {
    input = [ [ id:'test', single_end:false ], // meta map
              "foo_paired_end.bam",
              []
            ]

    SAMTOOLS_VIEW ( input, [] )
}
@@ -14,3 +14,11 @@
    - samtools
  files:
    - path: output/samtools/test.cram

- name: samtools view test_samtools_view_stubs
  command: nextflow run ./tests/modules/samtools/view -entry test_samtools_view -c ./tests/config/nextflow.config -c ./tests/modules/samtools/view/nextflow.config -stub-run
  tags:
    - samtools/view
    - samtools
  files:
    - path: output/samtools/test.bam
tests/modules/srst2/srst2/main.nf (new file, 53 lines)
@@ -0,0 +1,53 @@
#!/usr/bin/env nextflow

nextflow.enable.dsl = 2

include { SRST2_SRST2 } from '../../../../modules/srst2/srst2/main.nf'

workflow test_srst2_srst2_exit {

    input = [
        [ id:'test', single_end:false, db:"test"], // meta map
        [ file(params.test_data['bacteroides_fragilis']['illumina']['test1_1_fastq_gz'], checkIfExists: true),
          file(params.test_data['bacteroides_fragilis']['illumina']['test1_2_fastq_gz'], checkIfExists: true) ],
        // [("")]
        file('https://raw.githubusercontent.com/nf-core/test-datasets/modules/data/delete_me/srst2/resFinder_20180221_srst2.fasta')
    ]

    SRST2_SRST2(input)
}

workflow test_srst2_srst2_mlst {

    input = [
        [ id:'test', single_end:false, db:"mlst"], // meta map
        [ file("https://raw.githubusercontent.com/nf-core/test-datasets/modules/data/delete_me/srst2/SRR9067271_1.fastq.gz", checkIfExists: true),
          file("https://raw.githubusercontent.com/nf-core/test-datasets/modules/data/delete_me/srst2/SRR9067271_2.fastq.gz", checkIfExists: true) ],
        file('https://raw.githubusercontent.com/nf-core/test-datasets/modules/data/delete_me/srst2/MLST_DB.fas')
    ]

    SRST2_SRST2(input)
}

workflow test_srst2_srst2_paired_end {

    input = [
        [ id:'test', single_end:false, db:"gene"], // meta map
        [ file(params.test_data['bacteroides_fragilis']['illumina']['test1_1_fastq_gz'], checkIfExists: true),
          file(params.test_data['bacteroides_fragilis']['illumina']['test1_2_fastq_gz'], checkIfExists: true) ],
        file('https://raw.githubusercontent.com/nf-core/test-datasets/modules/data/delete_me/srst2/resFinder_20180221_srst2.fasta') // Change to params.test_data syntax after the data is included in tests/config/test_data.config
    ]

    SRST2_SRST2(input)
}

workflow test_srst2_srst2_single_end {

    input = [
        [ id:'test', single_end:true, db:"gene" ], // meta map
        file(params.test_data['bacteroides_fragilis']['illumina']['test1_1_fastq_gz'], checkIfExists: true),
        file('https://raw.githubusercontent.com/nf-core/test-datasets/modules/data/delete_me/srst2/resFinder_20180221_srst2.fasta') // Change to params.test_data syntax after the data is included in tests/config/test_data.config
    ]

    SRST2_SRST2(input)
}
tests/modules/srst2/srst2/nextflow.config (new file, 5 lines)
@@ -0,0 +1,5 @@
process {

    publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }

}
tests/modules/srst2/srst2/test.yml (new file, 51 lines)
@@ -0,0 +1,51 @@
- name: srst2 srst2 test_srst2_srst2_exit # tests that the pipeline exits when meta.db is not set
  command: nextflow run tests/modules/srst2/srst2 -entry test_srst2_srst2_exit -c tests/config/nextflow.config
  tags:
    - srst2/srst2
    - srst2
  exit_code: 1

- name: srst2 srst2 test_srst2_srst2_mlst
  command: nextflow run tests/modules/srst2/srst2 -entry test_srst2_srst2_mlst -c tests/config/nextflow.config
  tags:
    - srst2/srst2
    - srst2
  files:
    - path: output/srst2/test__SRR9067271.MLST_DB.pileup
      contains:
        - "dnaJ-1 2 C 17 .........,....... FFFFFFFFFFFFFFFFF"
    - path: output/srst2/test__SRR9067271.MLST_DB.sorted.bam
    - path: output/srst2/test__mlst__MLST_DB__results.txt
      md5sum: ec1b1f69933401d67c57f64cad11a098
    - path: output/srst2/versions.yml
      md5sum: a0c256a2fd3636069710b8ef22ee5ea7

- name: srst2 srst2 test_srst2_srst2_paired_end
  command: nextflow run tests/modules/srst2/srst2 -entry test_srst2_srst2_paired_end -c tests/config/nextflow.config
  tags:
    - srst2/srst2
    - srst2
  files:
    - path: output/srst2/test__genes__resFinder_20180221_srst2__results.txt
      md5sum: 099aa6cacec5524b311f606debdfb3a9
    - path: output/srst2/test__test1.resFinder_20180221_srst2.pileup
      md5sum: 64b512ff495b828c456405ec7b676ad1
    - path: output/srst2/test__test1.resFinder_20180221_srst2.sorted.bam
    - path: output/srst2/versions.yml
      md5sum: b446a70f1a2b4f60757829bcd744a214

- name: srst2 srst2 test_srst2_srst2_single_end
  command: nextflow run tests/modules/srst2/srst2 -entry test_srst2_srst2_single_end -c tests/config/nextflow.config
  tags:
    - srst2/srst2
    - srst2
  files:
    - path: output/srst2/test__fullgenes__resFinder_20180221_srst2__results.txt
      md5sum: d0762ef8c38afd0e0a34cce52ed1a3db
    - path: output/srst2/test__genes__resFinder_20180221_srst2__results.txt
      md5sum: b8850c6644406d8b131e471ecc3f9013
    - path: output/srst2/test__test1_1.resFinder_20180221_srst2.pileup
      md5sum: 5f6279dc8124aa762a9dfe3d7a871277
    - path: output/srst2/test__test1_1.resFinder_20180221_srst2.sorted.bam
    - path: output/srst2/versions.yml
      md5sum: 790fe00493c6634d17801a930073218b
tests/modules/vardictjava/main.nf (new file, 23 lines)
@@ -0,0 +1,23 @@
#!/usr/bin/env nextflow

nextflow.enable.dsl = 2

include { VARDICTJAVA } from '../../../modules/vardictjava/main.nf'

workflow test_vardictjava {

    bam_input_ch = Channel.value([
        [ id:'test' ], // meta map
        file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true),
        file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam_bai'], checkIfExists: true)
    ])

    bed = Channel.value(file(params.test_data['homo_sapiens']['genome']['genome_bed'], checkIfExists: true))

    reference = Channel.value([
        file(params.test_data['homo_sapiens']['genome']['genome_fasta'], checkIfExists: true),
        file(params.test_data['homo_sapiens']['genome']['genome_fasta_fai'], checkIfExists: true)
    ])

    VARDICTJAVA ( bam_input_ch, bed, reference )
}
tests/modules/vardictjava/nextflow.config (new file, 5 lines)
@@ -0,0 +1,5 @@
process {

    publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }

}
tests/modules/vardictjava/test.yml (new file, 9 lines)
@@ -0,0 +1,9 @@
- name: vardictjava test_vardictjava
  command: nextflow run tests/modules/vardictjava -entry test_vardictjava -c tests/config/nextflow.config
  tags:
    - vardictjava
  files:
    - path: output/vardictjava/test.vcf.gz
      md5sum: 3f1f227afc532bddeb58f16fd3013fc8
    - path: output/vardictjava/versions.yml
      md5sum: 9b62c431a4f2680412b61c7071bdb1cd
tests/subworkflows/nf-core/bam_qc_picard/main.nf (new file, 27 lines)
@@ -0,0 +1,27 @@
#!/usr/bin/env nextflow

nextflow.enable.dsl = 2

include { BAM_QC_PICARD } from '../../../../subworkflows/nf-core/bam_qc_picard/main' addParams([:])

workflow test_bam_qc_picard_wgs {
    input = [ [ id:'test', single_end:false ], // meta map
              file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
            ]
    fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
    fasta_fai = file(params.test_data['sarscov2']['genome']['genome_fasta_fai'], checkIfExists: true)

    BAM_QC_PICARD ( input, fasta, fasta_fai, [], [] )
}

workflow test_bam_qc_picard_targetted {
    input = [ [ id:'test', single_end:false ], // meta map
              file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
            ]
    fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
    fasta_fai = file(params.test_data['sarscov2']['genome']['genome_fasta_fai'], checkIfExists: true)
    bait = file(params.test_data['sarscov2']['genome']['baits_interval_list'], checkIfExists: true)
    target = file(params.test_data['sarscov2']['genome']['targets_interval_list'], checkIfExists: true)

    BAM_QC_PICARD ( input, fasta, fasta_fai, bait, target )
}
tests/subworkflows/nf-core/bam_qc_picard/nextflow.config (new file, 5 lines)
@@ -0,0 +1,5 @@
process {

    publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }

}
tests/subworkflows/nf-core/bam_qc_picard/test.yml (new file, 33 lines)
@@ -0,0 +1,33 @@
- name: bam qc picard wgs
  command: nextflow run ./tests/subworkflows/nf-core/bam_qc_picard -entry test_bam_qc_picard_wgs -c tests/config/nextflow.config
  tags:
    - subworkflows
    # - subworkflows/bam_qc_picard
    # Modules
    # - picard
    # - picard/collectmultiplemetrics
    # - picard/collectwgsmetrics
  files:
    - path: ./output/picard/test.CollectMultipleMetrics.alignment_summary_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.insert_size_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.base_distribution_by_cycle_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.quality_by_cycle_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.quality_distribution_metrics
    - path: ./output/picard/test.CollectWgsMetrics.coverage_metrics

- name: bam qc picard targetted
  command: nextflow run ./tests/subworkflows/nf-core/bam_qc_picard -entry test_bam_qc_picard_targetted -c tests/config/nextflow.config
  tags:
    - subworkflows
    # - subworkflows/bam_qc_picard
    # Modules
    # - picard
    # - picard/collectmultiplemetrics
    # - picard/collecthsmetrics
  files:
    - path: ./output/picard/test.CollectMultipleMetrics.alignment_summary_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.insert_size_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.base_distribution_by_cycle_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.quality_by_cycle_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.quality_distribution_metrics
    - path: ./output/picard/test.CollectHsMetrics.coverage_metrics