Mirror of https://github.com/MillironX/nf-core_modules.git (synced 2024-11-14 13:43:09 +00:00)

Merge branch 'master' into srst2/srst2

This commit is contained in: c56b371713

59 changed files with 763 additions and 292 deletions
.github/ISSUE_TEMPLATE/bug_report.md (vendored) — 64 lines removed

```diff
@@ -1,64 +0,0 @@
----
-name: Bug report
-about: Report something that is broken or incorrect
-title: "[BUG]"
----
-
-<!--
-# nf-core/module bug report
-
-Hi there!
-
-Thanks for telling us about a problem with the modules.
-Please delete this text and anything that's not relevant from the template below:
--->
-
-## Check Documentation
-
-I have checked the following places for your error:
-
-- [ ] [nf-core website: troubleshooting](https://nf-co.re/usage/troubleshooting)
-- [ ] [nf-core/module documentation](https://github.com/nf-core/modules/blob/master/README.md)
-
-## Description of the bug
-
-<!-- A clear and concise description of what the bug is. -->
-
-## Steps to reproduce
-
-Steps to reproduce the behaviour:
-
-1. Command line: <!-- [e.g. `nextflow run ...`] -->
-2. See error: <!-- [Please provide your error message] -->
-
-## Expected behaviour
-
-<!-- A clear and concise description of what you expected to happen. -->
-
-## Log files
-
-Have you provided the following extra information/files:
-
-- [ ] The command used to run the module
-- [ ] The `.nextflow.log` file <!-- this is a hidden file in the directory where you launched the module -->
-
-## System
-
-- Hardware: <!-- [e.g. HPC, Desktop, Cloud...] -->
-- Executor: <!-- [e.g. slurm, local, awsbatch...] -->
-- OS: <!-- [e.g. CentOS Linux, macOS, Linux Mint...] -->
-- Version <!-- [e.g. 7, 10.13.6, 18.3...] -->
-
-## Nextflow Installation
-
-- Version: <!-- [e.g. 19.10.0] -->
-
-## Container engine
-
-- Engine: <!-- [e.g. Conda, Docker, Singularity or Podman] -->
-- version: <!-- [e.g. 1.0.0] -->
-- Image tag: <!-- [e.g. nfcore/module:2.6] -->
-
-## Additional context
-
-<!-- Add any other context about the problem here. -->
```
.github/ISSUE_TEMPLATE/bug_report.yml (vendored, new file) — 52 lines added

```diff
@@ -0,0 +1,52 @@
+name: Bug report
+description: Report something that is broken or incorrect
+labels: bug
+body:
+  - type: checkboxes
+    attributes:
+      label: Have you checked the docs?
+      description: I have checked the following places for my error
+      options:
+        - label: "[nf-core website: troubleshooting](https://nf-co.re/usage/troubleshooting)"
+          required: true
+        - label: "[nf-core modules documentation](https://nf-co.re/docs/contributing/modules)"
+          required: true
+
+  - type: textarea
+    id: description
+    attributes:
+      label: Description of the bug
+      description: A clear and concise description of what the bug is.
+    validations:
+      required: true
+
+  - type: textarea
+    id: command_used
+    attributes:
+      label: Command used and terminal output
+      description: Steps to reproduce the behaviour. Please paste the command you used to launch the pipeline and the output from your terminal.
+      render: console
+      placeholder: |
+        $ nextflow run ...
+
+        Some output where something broke
+
+  - type: textarea
+    id: files
+    attributes:
+      label: Relevant files
+      description: |
+        Please drag and drop the relevant files here. Create a `.zip` archive if the extension is not allowed.
+        Your verbose log file `.nextflow.log` is often useful _(this is a hidden file in the directory where you launched the pipeline)_ as well as custom Nextflow configuration files.
+
+  - type: textarea
+    id: system
+    attributes:
+      label: System information
+      description: |
+        * Nextflow version _(eg. 21.10.3)_
+        * Hardware _(eg. HPC, Desktop, Cloud)_
+        * Executor _(eg. slurm, local, awsbatch)_
+        * Container engine and version: _(e.g. Docker 1.0.0, Singularity, Conda, Podman, Shifter or Charliecloud)_
+        * OS and version: _(eg. CentOS Linux, macOS, Ubuntu 22.04)_
+        * Image tag: <!-- [e.g. nfcore/cellranger:2.6] -->
```
.github/ISSUE_TEMPLATE/feature_request.md (vendored) — 32 lines removed

```diff
@@ -1,32 +0,0 @@
----
-name: Feature request
-about: Suggest an idea for nf-core/modules
-title: "[FEATURE]"
----
-
-<!--
-# nf-core/modules feature request
-
-Hi there!
-
-Thanks for suggesting a new feature for the modules!
-Please delete this text and anything that's not relevant from the template below:
--->
-
-## Is your feature request related to a problem? Please describe
-
-<!-- A clear and concise description of what the problem is. -->
-
-<!-- e.g. [I'm always frustrated when ...] -->
-
-## Describe the solution you'd like
-
-<!-- A clear and concise description of what you want to happen. -->
-
-## Describe alternatives you've considered
-
-<!-- A clear and concise description of any alternative solutions or features you've considered. -->
-
-## Additional context
-
-<!-- Add any other context about the feature request here. -->
```
.github/ISSUE_TEMPLATE/feature_request.yml (vendored, new file) — 32 lines added

```diff
@@ -0,0 +1,32 @@
+name: Feature request
+description: Suggest an idea for nf-core/modules
+labels: feature
+title: "[FEATURE]"
+body:
+  - type: textarea
+    id: description
+    attributes:
+      label: Is your feature request related to a problem? Please describe
+      description: A clear and concise description of what the bug is.
+      placeholder: |
+        <!-- e.g. [I'm always frustrated when ...] -->
+    validations:
+      required: true
+
+  - type: textarea
+    id: solution
+    attributes:
+      label: Describe the solution you'd like
+      description: A clear and concise description of the solution you want to happen.
+
+  - type: textarea
+    id: alternatives
+    attributes:
+      label: Describe alternatives you've considered
+      description: A clear and concise description of any alternative solutions or features you've considered.
+
+  - type: textarea
+    id: additional_context
+    attributes:
+      label: Additional context
+      description: Add any other context about the feature request here.
```
.github/ISSUE_TEMPLATE/new_module.md (vendored) — 26 lines removed

```diff
@@ -1,26 +0,0 @@
----
-name: New module
-about: Suggest a new module for nf-core/modules
-title: "new module: TOOL/SUBTOOL"
-label: new module
----
-
-<!--
-# nf-core/modules new module suggestion
-
-Hi there!
-
-Thanks for suggesting a new module for the modules!
-Please delete this text and anything that's not relevant from the template below:
-
-Replace TOOL with the bioconda name for the tool in the following text, so that the link is functional.
-
-Replace TOOL/SUBTOOL in the issue title so that it's understandable.
--->
-
-I think it would be good to have a module for [TOOL](https://bioconda.github.io/recipes/TOOL/README.html)
-
-- [ ] This module does not exist yet with the [`nf-core modules list`](https://github.com/nf-core/tools#list-modules) command
-- [ ] There is no [open pull request](https://github.com/nf-core/modules/pulls) for this module
-- [ ] There is no [open issue](https://github.com/nf-core/modules/issues) for this module
-- [ ] If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
```
.github/ISSUE_TEMPLATE/new_module.yml (vendored, new file) — 36 lines added

```diff
@@ -0,0 +1,36 @@
+name: New module
+description: Suggest a new module for nf-core/modules
+title: "new module: TOOL/SUBTOOL"
+labels: new module
+body:
+  - type: checkboxes
+    attributes:
+      label: Is there an existing module for this?
+      description: This module does not exist yet with the [`nf-core modules list`](https://github.com/nf-core/tools#list-modules) command
+      options:
+        - label: I have searched for the existing module
+          required: true
+
+  - type: checkboxes
+    attributes:
+      label: Is there an open PR for this?
+      description: There is no [open pull request](https://github.com/nf-core/modules/pulls) for this module
+      options:
+        - label: I have searched for existing PRs
+          required: true
+
+  - type: checkboxes
+    attributes:
+      label: Is there an open issue for this?
+      description: There is no [open issue](https://github.com/nf-core/modules/issues) for this module
+      options:
+        - label: I have searched for existing issues
+          required: true
+
+  - type: checkboxes
+    attributes:
+      label: Are you going to work on this?
+      description: If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
+      options:
+        - label: If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
+          required: false
```
modules/antismash/antismashlitedownloaddatabases/main.nf

```diff
@@ -27,9 +27,7 @@ process ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES {

     output:
     path("antismash_db") , emit: database
-    path("css"), emit: css_dir
-    path("detection"), emit: detection_dir
-    path("modules"), emit: modules_dir
+    path("antismash_dir"), emit: antismash_dir
     path "versions.yml", emit: versions

     when:
@@ -37,11 +35,19 @@ process ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES {

     script:
     def args = task.ext.args ?: ''
+    conda = params.enable_conda
     """
     download-antismash-databases \\
         --database-dir antismash_db \\
         $args

+    if [[ $conda = false ]]; \
+    then \
+        cp -r /usr/local/lib/python3.8/site-packages/antismash antismash_dir; \
+    else \
+        cp -r \$(python -c 'import antismash;print(antismash.__file__.split("/__")[0])') antismash_dir; \
+    fi

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
         antismash-lite: \$(antismash --version | sed 's/antiSMASH //')
```
modules/antismash/antismashlitedownloaddatabases/meta.yml

```diff
@@ -50,21 +50,11 @@ output:
       type: directory
       description: Download directory for antiSMASH databases
       pattern: "antismash_db"
-  - css_dir:
+  - antismash_dir:
       type: directory
       description: |
-        antismash/outputs/html/css folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
-      pattern: "css"
-  - detection_dir:
-      type: directory
-      description: |
-        antismash/detection folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
-      pattern: "detection"
-  - modules_dir:
-      type: directory
-      description: |
-        antismash/modules folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
-      pattern: "modules"
+        antismash installation folder which is being modified during the antiSMASH database downloading step. The modified files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database and installation folder in pipelines.
+      pattern: "antismash_dir"

 authors:
   - "@jasmezz"
```
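For orientation, a minimal sketch of driving the reworked module from a test workflow. The include path follows the nf-core layout; the staged input channels and tarball names are illustrative assumptions (this diff only changes the outputs):

```nextflow
include { ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES } from './modules/antismash/antismashlitedownloaddatabases/main'

workflow {
    // hypothetical staged inputs; the module's inputs are untouched by this diff
    ch_css       = Channel.fromPath('css.tar.gz')
    ch_detection = Channel.fromPath('detection.tar.gz')
    ch_modules   = Channel.fromPath('modules.tar.gz')

    ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES ( ch_css, ch_detection, ch_modules )

    ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES.out.database.view()       // antismash_db
    ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES.out.antismash_dir.view()  // replaces css_dir, detection_dir, modules_dir
}
```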
modules/bamtools/split/main.nf

```diff
@@ -2,10 +2,10 @@ process BAMTOOLS_SPLIT {
     tag "$meta.id"
     label 'process_low'

-    conda (params.enable_conda ? "bioconda::bamtools=2.5.1" : null)
+    conda (params.enable_conda ? "bioconda::bamtools=2.5.2" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/bamtools:2.5.1--h9a82719_9' :
-        'quay.io/biocontainers/bamtools:2.5.1--h9a82719_9' }"
+        'https://depot.galaxyproject.org/singularity/bamtools:2.5.2--hd03093a_0' :
+        'quay.io/biocontainers/bamtools:2.5.2--hd03093a_0' }"

     input:
     tuple val(meta), path(bam)
@@ -20,11 +20,15 @@ process BAMTOOLS_SPLIT {
     script:
     def args = task.ext.args ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
+    def input_list = bam.collect{"-in $it"}.join(' ')
     """
     bamtools \\
-        split \\
-        -in $bam \\
-        $args
+        merge \\
+        $input_list \\
+        | bamtools \\
+        split \\
+        -stub $prefix \\
+        $args

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
```
modules/bamtools/split/meta.yml

```diff
@@ -23,7 +23,7 @@ input:
           e.g. [ id:'test', single_end:false ]
   - bam:
       type: file
-      description: A BAM file to split
+      description: A list of one or more BAM files to merge and then split
       pattern: "*.bam"

 output:
@@ -43,3 +43,4 @@ output:

 authors:
   - "@sguizard"
+  - "@matthdsm"
```
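A minimal usage sketch of the reworked process; the sample map and file names are illustrative. The point of the change is that the `bam` input may now be a list, which is merged before splitting:

```nextflow
include { BAMTOOLS_SPLIT } from './modules/bamtools/split/main'

workflow {
    // two BAM chunks for one sample: merged first, then split using the -stub prefix
    ch_bam = Channel.of([ [ id:'test' ], [ file('chunk1.bam'), file('chunk2.bam') ] ])
    BAMTOOLS_SPLIT ( ch_bam )
}
```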
modules/diamond/blastp/main.nf

```diff
@@ -2,20 +2,26 @@ process DIAMOND_BLASTP {
     tag "$meta.id"
     label 'process_medium'

-    // Dimaond is limited to v2.0.9 because there is not a
-    // singularity version higher than this at the current time.
-    conda (params.enable_conda ? "bioconda::diamond=2.0.9" : null)
+    conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
-        'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
+        'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
+        'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"

     input:
     tuple val(meta), path(fasta)
     path db
+    val out_ext
+    val blast_columns

     output:
-    tuple val(meta), path('*.txt'), emit: txt
-    path "versions.yml" , emit: versions
+    tuple val(meta), path('*.blast'), optional: true, emit: blast
+    tuple val(meta), path('*.xml') , optional: true, emit: xml
+    tuple val(meta), path('*.txt') , optional: true, emit: txt
+    tuple val(meta), path('*.daa') , optional: true, emit: daa
+    tuple val(meta), path('*.sam') , optional: true, emit: sam
+    tuple val(meta), path('*.tsv') , optional: true, emit: tsv
+    tuple val(meta), path('*.paf') , optional: true, emit: paf
+    path "versions.yml" , emit: versions

     when:
     task.ext.when == null || task.ext.when
@@ -23,6 +29,21 @@ process DIAMOND_BLASTP {
     script:
     def args = task.ext.args ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
+    def columns = blast_columns ? "${blast_columns}" : ''
+    switch ( out_ext ) {
+        case "blast": outfmt = 0; break
+        case "xml": outfmt = 5; break
+        case "txt": outfmt = 6; break
+        case "daa": outfmt = 100; break
+        case "sam": outfmt = 101; break
+        case "tsv": outfmt = 102; break
+        case "paf": outfmt = 103; break
+        default:
+            outfmt = '6';
+            out_ext = 'txt';
+            log.warn("Unknown output file format provided (${out_ext}): selecting DIAMOND default of tabular BLAST output (txt)");
+            break
+    }
     """
     DB=`find -L ./ -name "*.dmnd" | sed 's/.dmnd//'`
@@ -31,8 +52,9 @@ process DIAMOND_BLASTP {
         --threads $task.cpus \\
         --db \$DB \\
         --query $fasta \\
+        --outfmt ${outfmt} ${columns} \\
         $args \\
-        --out ${prefix}.txt
+        --out ${prefix}.${out_ext}

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
```
modules/diamond/blastp/meta.yml

```diff
@@ -28,12 +28,50 @@ input:
       type: directory
       description: Directory containing the protein blast database
       pattern: "*"
+  - out_ext:
+      type: string
+      description: |
+        Specify the type of output file to be generated. `blast` corresponds to
+        BLAST pairwise format. `xml` corresponds to BLAST xml format.
+        `txt` corresponds to BLAST tabular format. `tsv` corresponds to
+        taxonomic classification format.
+      pattern: "blast|xml|txt|daa|sam|tsv|paf"
+  - blast_columns:
+      type: string
+      description: |
+        Optional space-separated list of DIAMOND tabular BLAST output keywords
+        used in conjunction with the 'txt' out_ext option (--outfmt 6). See
+        the DIAMOND documentation for more information.

 output:
-  - txt:
+  - blast:
       type: file
       description: File containing blastp hits
-      pattern: "*.{blastp.txt}"
+      pattern: "*.{blast}"
+  - xml:
+      type: file
+      description: File containing blastp hits
+      pattern: "*.{xml}"
+  - txt:
+      type: file
+      description: File containing hits in tabular BLAST format.
+      pattern: "*.{txt}"
+  - daa:
+      type: file
+      description: File containing hits in DAA format
+      pattern: "*.{daa}"
+  - sam:
+      type: file
+      description: File containing aligned reads in SAM format
+      pattern: "*.{sam}"
+  - tsv:
+      type: file
+      description: Tab-separated file containing taxonomic classification of hits
+      pattern: "*.{tsv}"
+  - paf:
+      type: file
+      description: File containing aligned reads in pairwise mapping format (PAF)
+      pattern: "*.{paf}"
   - versions:
       type: file
       description: File containing software versions
@@ -41,3 +79,4 @@ output:

 authors:
   - "@spficklin"
+  - "@jfy133"
```
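A minimal sketch of calling the extended process with the two new `val` inputs; the channel contents and the column keywords are illustrative:

```nextflow
include { DIAMOND_BLASTP } from './modules/diamond/blastp/main'

workflow {
    ch_proteins = Channel.of([ [ id:'test' ], file('proteome.fasta') ])
    ch_db       = Channel.fromPath('diamond_db/')   // directory containing a *.dmnd file

    // 'txt' maps to --outfmt 6; the column list is appended after --outfmt verbatim
    DIAMOND_BLASTP ( ch_proteins, ch_db, 'txt', 'qseqid sseqid pident length evalue' )
    DIAMOND_BLASTP.out.txt.view()
}
```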
modules/diamond/blastx/main.nf

```diff
@@ -2,20 +2,26 @@ process DIAMOND_BLASTX {
     tag "$meta.id"
     label 'process_medium'

-    // Dimaond is limited to v2.0.9 because there is not a
-    // singularity version higher than this at the current time.
-    conda (params.enable_conda ? "bioconda::diamond=2.0.9" : null)
+    conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
-        'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
+        'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
+        'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"

     input:
     tuple val(meta), path(fasta)
     path db
+    val out_ext
+    val blast_columns

     output:
-    tuple val(meta), path('*.txt'), emit: txt
-    path "versions.yml" , emit: versions
+    tuple val(meta), path('*.blast'), optional: true, emit: blast
+    tuple val(meta), path('*.xml') , optional: true, emit: xml
+    tuple val(meta), path('*.txt') , optional: true, emit: txt
+    tuple val(meta), path('*.daa') , optional: true, emit: daa
+    tuple val(meta), path('*.sam') , optional: true, emit: sam
+    tuple val(meta), path('*.tsv') , optional: true, emit: tsv
+    tuple val(meta), path('*.paf') , optional: true, emit: paf
+    path "versions.yml" , emit: versions

     when:
     task.ext.when == null || task.ext.when
@@ -23,6 +29,21 @@ process DIAMOND_BLASTX {
     script:
     def args = task.ext.args ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
+    def columns = blast_columns ? "${blast_columns}" : ''
+    switch ( out_ext ) {
+        case "blast": outfmt = 0; break
+        case "xml": outfmt = 5; break
+        case "txt": outfmt = 6; break
+        case "daa": outfmt = 100; break
+        case "sam": outfmt = 101; break
+        case "tsv": outfmt = 102; break
+        case "paf": outfmt = 103; break
+        default:
+            outfmt = '6';
+            out_ext = 'txt';
+            log.warn("Unknown output file format provided (${out_ext}): selecting DIAMOND default of tabular BLAST output (txt)");
+            break
+    }
     """
     DB=`find -L ./ -name "*.dmnd" | sed 's/.dmnd//'`
@@ -31,8 +52,9 @@ process DIAMOND_BLASTX {
         --threads $task.cpus \\
         --db \$DB \\
         --query $fasta \\
+        --outfmt ${outfmt} ${columns} \\
         $args \\
-        --out ${prefix}.txt
+        --out ${prefix}.${out_ext}

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
```
modules/diamond/blastx/meta.yml

```diff
@@ -28,12 +28,44 @@ input:
       type: directory
       description: Directory containing the nucleotide blast database
       pattern: "*"
+  - out_ext:
+      type: string
+      description: |
+        Specify the type of output file to be generated. `blast` corresponds to
+        BLAST pairwise format. `xml` corresponds to BLAST xml format.
+        `txt` corresponds to BLAST tabular format. `tsv` corresponds to
+        taxonomic classification format.
+      pattern: "blast|xml|txt|daa|sam|tsv|paf"

 output:
+  - blast:
+      type: file
+      description: File containing blastx hits
+      pattern: "*.{blast}"
+  - xml:
+      type: file
+      description: File containing blastx hits
+      pattern: "*.{xml}"
   - txt:
       type: file
-      description: File containing blastx hits
-      pattern: "*.{blastx.txt}"
+      description: File containing hits in tabular BLAST format.
+      pattern: "*.{txt}"
+  - daa:
+      type: file
+      description: File containing hits in DAA format
+      pattern: "*.{daa}"
+  - sam:
+      type: file
+      description: File containing aligned reads in SAM format
+      pattern: "*.{sam}"
+  - tsv:
+      type: file
+      description: Tab-separated file containing taxonomic classification of hits
+      pattern: "*.{tsv}"
+  - paf:
+      type: file
+      description: File containing aligned reads in pairwise mapping format (PAF)
+      pattern: "*.{paf}"
   - versions:
       type: file
       description: File containing software versions
@@ -41,3 +73,4 @@ output:

 authors:
   - "@spficklin"
+  - "@jfy133"
```
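Both diamond modules share the same `out_ext` → `--outfmt` mapping; a standalone Groovy sketch of the fallback path for an unrecognised extension (values mirror the switch above, the input value is hypothetical):

```groovy
def out_ext = 'csv'        // not one of blast|xml|txt|daa|sam|tsv|paf
def outfmt
switch ( out_ext ) {
    case 'blast': outfmt = 0;   break
    case 'xml':   outfmt = 5;   break
    case 'txt':   outfmt = 6;   break
    case 'daa':   outfmt = 100; break
    case 'sam':   outfmt = 101; break
    case 'tsv':   outfmt = 102; break
    case 'paf':   outfmt = 103; break
    default:
        // unknown format: fall back to DIAMOND's tabular BLAST default
        outfmt  = 6
        out_ext = 'txt'
        break
}
assert outfmt == 6 && out_ext == 'txt'
```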
|
@ -2,12 +2,10 @@ process DIAMOND_MAKEDB {
|
||||||
tag "$fasta"
|
tag "$fasta"
|
||||||
label 'process_medium'
|
label 'process_medium'
|
||||||
|
|
||||||
// Dimaond is limited to v2.0.9 because there is not a
|
conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
|
||||||
// singularity version higher than this at the current time.
|
|
||||||
conda (params.enable_conda ? 'bioconda::diamond=2.0.9' : null)
|
|
||||||
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
|
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
|
||||||
'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
|
'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
|
||||||
'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
|
'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"
|
||||||
|
|
||||||
input:
|
input:
|
||||||
path fasta
|
path fasta
|
||||||
|
|
modules/elprep/merge/main.nf (new file) — 43 lines added

```diff
@@ -0,0 +1,43 @@
+process ELPREP_MERGE {
+    tag "$meta.id"
+    label 'process_low'
+
+    conda (params.enable_conda ? "bioconda::elprep=5.1.2" : null)
+    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
+        'https://depot.galaxyproject.org/singularity/elprep:5.1.2--he881be0_0':
+        'quay.io/biocontainers/elprep:5.1.2--he881be0_0' }"
+
+    input:
+    tuple val(meta), path(bam)
+
+    output:
+    tuple val(meta), path("output/**.{bam,sam}") , emit: bam
+    path "versions.yml" , emit: versions
+
+    when:
+    task.ext.when == null || task.ext.when
+
+    script:
+    def args = task.ext.args ?: ''
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    def suffix = args.contains("--output-type sam") ? "sam" : "bam"
+    def single_end = meta.single_end ? " --single-end" : ""
+
+    """
+    # create directory and move all input so elprep can find and merge them before splitting
+    mkdir input
+    mv ${bam} input/
+
+    elprep merge \\
+        input/ \\
+        output/${prefix}.${suffix} \\
+        $args \\
+        ${single_end} \\
+        --nr-of-threads $task.cpus
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        elprep: \$(elprep 2>&1 | head -n2 | tail -n1 |sed 's/^.*version //;s/ compiled.*\$//')
+    END_VERSIONS
+    """
+}
```
modules/elprep/merge/meta.yml (new file) — 44 lines added

```diff
@@ -0,0 +1,44 @@
+name: "elprep_merge"
+description: Merge split bam/sam chunks in one file
+keywords:
+  - bam
+  - sam
+  - merge
+tools:
+  - "elprep":
+      description: "elPrep is a high-performance tool for preparing .sam/.bam files for variant calling in sequencing pipelines. It can be used as a drop-in replacement for SAMtools/Picard/GATK4."
+      homepage: "https://github.com/ExaScience/elprep"
+      documentation: "https://github.com/ExaScience/elprep"
+      tool_dev_url: "https://github.com/ExaScience/elprep"
+      doi: "10.1371/journal.pone.0244471"
+      licence: "['AGPL v3']"
+
+input:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - bam:
+      type: file
+      description: List of BAM/SAM chunks to merge
+      pattern: "*.{bam,sam}"
+
+output:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  #
+  - versions:
+      type: file
+      description: File containing software versions
+      pattern: "versions.yml"
+  - bam:
+      type: file
+      description: Merged BAM/SAM file
+      pattern: "*.{bam,sam}"
+
+authors:
+  - "@matthdsm"
```
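A minimal sketch of feeding the new module with split chunks from one sample; the grouping and file names are illustrative assumptions:

```nextflow
include { ELPREP_MERGE } from './modules/elprep/merge/main'

workflow {
    // per-contig chunks produced upstream (e.g. by elprep/split), collected per sample
    ch_chunks = Channel.of([ [ id:'test', single_end:false ],
                             [ file('chr1.bam'), file('chr2.bam') ] ])

    ELPREP_MERGE ( ch_chunks )
    ELPREP_MERGE.out.bam.view()   // output/test.bam
}
```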
modules/gatk4/markduplicates/main.nf

```diff
@@ -12,7 +12,7 @@ process GATK4_MARKDUPLICATES {

     output:
     tuple val(meta), path("*.bam")    , emit: bam
-    tuple val(meta), path("*.bai")    , emit: bai
+    tuple val(meta), path("*.bai")    , optional:true, emit: bai
     tuple val(meta), path("*.metrics"), emit: metrics
     path "versions.yml"               , emit: versions

```
modules/minimap2/align/main.nf

```diff
@@ -27,8 +27,8 @@ process MINIMAP2_ALIGN {
     def prefix = task.ext.prefix ?: "${meta.id}"
     def input_reads = meta.single_end ? "$reads" : "${reads[0]} ${reads[1]}"
     def bam_output = bam_format ? "-a | samtools sort | samtools view -@ ${task.cpus} -b -h -o ${prefix}.bam" : "-o ${prefix}.paf"
-    def cigar_paf = cigar_paf_format && !sam_format ? "-c" : ''
-    def set_cigar_bam = cigar_bam && sam_format ? "-L" : ''
+    def cigar_paf = cigar_paf_format && !bam_format ? "-c" : ''
+    def set_cigar_bam = cigar_bam && bam_format ? "-L" : ''
     """
     minimap2 \\
         $args \\
```
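The fix replaces two references to an undefined `sam_format` variable with the module's actual `bam_format` input; a standalone sketch of the corrected flag logic (the flag values are illustrative):

```groovy
def bam_format       = true    // emit sorted BAM instead of PAF
def cigar_paf_format = true
def cigar_bam        = false

// -c (output CIGAR in PAF) only applies when writing PAF
def cigar_paf     = cigar_paf_format && !bam_format ? '-c' : ''
// -L (move long CIGARs to the BAM CG tag) only applies when writing BAM
def set_cigar_bam = cigar_bam && bam_format ? '-L' : ''

assert cigar_paf == '' && set_cigar_bam == ''
```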
modules/picard/addorreplacereadgroups/main.nf

```diff
@@ -2,10 +2,10 @@ process PICARD_ADDORREPLACEREADGROUPS {
     tag "$meta.id"
     label 'process_low'

-    conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam)
@@ -38,12 +38,12 @@ process PICARD_ADDORREPLACEREADGROUPS {
         -Xmx${avail_mem}g \\
         --INPUT ${bam} \\
         --OUTPUT ${prefix}.bam \\
-        -ID ${ID} \\
-        -LB ${LIBRARY} \\
-        -PL ${PLATFORM} \\
-        -PU ${BARCODE} \\
-        -SM ${SAMPLE} \\
-        -CREATE_INDEX true
+        --RGID ${ID} \\
+        --RGLB ${LIBRARY} \\
+        --RGPL ${PLATFORM} \\
+        --RGPU ${BARCODE} \\
+        --RGSM ${SAMPLE} \\
+        --CREATE_INDEX true

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
```
modules/picard/cleansam/main.nf

```diff
@@ -2,10 +2,10 @@ process PICARD_CLEANSAM {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam)
@@ -31,8 +31,8 @@ process PICARD_CLEANSAM {
         -Xmx${avail_mem}g \\
         CleanSam \\
         ${args} \\
-        -I ${bam} \\
-        -O ${prefix}.bam
+        --INPUT ${bam} \\
+        --OUTPUT ${prefix}.bam

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
```
modules/picard/collecthsmetrics/main.nf

```diff
@@ -2,10 +2,10 @@ process PICARD_COLLECTHSMETRICS {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam)
@@ -38,10 +38,10 @@ process PICARD_COLLECTHSMETRICS {
         CollectHsMetrics \\
         $args \\
         $reference \\
-        -BAIT_INTERVALS $bait_intervals \\
-        -TARGET_INTERVALS $target_intervals \\
-        -INPUT $bam \\
-        -OUTPUT ${prefix}.CollectHsMetrics.coverage_metrics
+        --BAIT_INTERVALS $bait_intervals \\
+        --TARGET_INTERVALS $target_intervals \\
+        --INPUT $bam \\
+        --OUTPUT ${prefix}.CollectHsMetrics.coverage_metrics


     cat <<-END_VERSIONS > versions.yml
```
modules/picard/collectmultiplemetrics/main.nf

```diff
@@ -2,10 +2,10 @@ process PICARD_COLLECTMULTIPLEMETRICS {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam)
@@ -33,9 +33,9 @@ process PICARD_COLLECTMULTIPLEMETRICS {
         -Xmx${avail_mem}g \\
         CollectMultipleMetrics \\
         $args \\
-        INPUT=$bam \\
-        OUTPUT=${prefix}.CollectMultipleMetrics \\
-        REFERENCE_SEQUENCE=$fasta
+        --INPUT $bam \\
+        --OUTPUT ${prefix}.CollectMultipleMetrics \\
+        --REFERENCE_SEQUENCE $fasta

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
```
modules/picard/collectwgsmetrics/main.nf

```diff
@@ -2,13 +2,13 @@ process PICARD_COLLECTWGSMETRICS {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
-    tuple val(meta), path(bam), path(bai)
+    tuple val(meta), path(bam)
     path fasta

     output:
@@ -32,9 +32,10 @@ process PICARD_COLLECTWGSMETRICS {
         -Xmx${avail_mem}g \\
         CollectWgsMetrics \\
         $args \\
-        INPUT=$bam \\
-        OUTPUT=${prefix}.CollectWgsMetrics.coverage_metrics \\
-        REFERENCE_SEQUENCE=$fasta
+        --INPUT $bam \\
+        --OUTPUT ${prefix}.CollectWgsMetrics.coverage_metrics \\
+        --REFERENCE_SEQUENCE $fasta
+

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
```
modules/picard/createsequencedictionary/main.nf

```diff
@@ -2,10 +2,10 @@ process PICARD_CREATESEQUENCEDICTIONARY {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(fasta)
@@ -31,8 +31,8 @@ process PICARD_CREATESEQUENCEDICTIONARY {
         -Xmx${avail_mem}g \\
         CreateSequenceDictionary \\
         $args \\
-        R=$fasta \\
-        O=${prefix}.dict
+        --REFERENCE $fasta \\
+        --OUTPUT ${prefix}.dict

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
```
modules/picard/crosscheckfingerprints/main.nf

```diff
@@ -2,10 +2,10 @@ process PICARD_CROSSCHECKFINGERPRINTS {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(input1)
```
modules/picard/filtersamreads/main.nf

```diff
@@ -2,10 +2,10 @@ process PICARD_FILTERSAMREADS {
     tag "$meta.id"
     label 'process_low'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam), path(readlist)
```
modules/picard/fixmateinformation/main.nf

```diff
@@ -2,10 +2,10 @@ process PICARD_FIXMATEINFORMATION {
     tag "$meta.id"
     label 'process_low'

-    conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam)
@@ -31,8 +31,8 @@ process PICARD_FIXMATEINFORMATION {
     picard \\
         FixMateInformation \\
         -Xmx${avail_mem}g \\
-        -I ${bam} \\
-        -O ${prefix}.bam \\
+        --INPUT ${bam} \\
+        --OUTPUT ${prefix}.bam \\
         --VALIDATION_STRINGENCY ${STRINGENCY}

     cat <<-END_VERSIONS > versions.yml
```
modules/picard/liftovervcf/main.nf

```diff
@@ -2,10 +2,10 @@ process PICARD_LIFTOVERVCF {
     tag "$meta.id"
     label 'process_low'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(input_vcf)
@@ -35,11 +35,11 @@ process PICARD_LIFTOVERVCF {
         -Xmx${avail_mem}g \\
         LiftoverVcf \\
         $args \\
-        I=$input_vcf \\
-        O=${prefix}.lifted.vcf.gz \\
-        CHAIN=$chain \\
-        REJECT=${prefix}.unlifted.vcf.gz \\
-        R=$fasta
+        --INPUT $input_vcf \\
+        --OUTPUT ${prefix}.lifted.vcf.gz \\
+        --CHAIN $chain \\
+        --REJECT ${prefix}.unlifted.vcf.gz \\
+        --REFERENCE_SEQUENCE $fasta

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
```
modules/picard/markduplicates/main.nf

```diff
@@ -2,10 +2,10 @@ process PICARD_MARKDUPLICATES {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam)
@@ -33,9 +33,9 @@ process PICARD_MARKDUPLICATES {
         -Xmx${avail_mem}g \\
         MarkDuplicates \\
         $args \\
-        I=$bam \\
-        O=${prefix}.bam \\
-        M=${prefix}.MarkDuplicates.metrics.txt
+        --INPUT $bam \\
+        --OUTPUT ${prefix}.bam \\
+        --METRICS_FILE ${prefix}.MarkDuplicates.metrics.txt

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
```
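All of the picard hunks in this merge follow the same pattern: bump picard to 2.27.1 and move from legacy `-I`/`KEY=value` options to new-style `--OPTION value` arguments. Any extra options injected through `ext.args` should follow suit; a hypothetical `modules.config` fragment (the option values are illustrative, not part of this diff):

```groovy
process {
    withName: 'PICARD_MARKDUPLICATES' {
        // new-style option syntax, consistent with picard 2.27.1
        ext.args   = '--VALIDATION_STRINGENCY LENIENT --ASSUME_SORT_ORDER coordinate'
        ext.prefix = { "${meta.id}.markdup" }
    }
}
```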
modules/picard/mergesamfiles/main.nf

```diff
@@ -2,10 +2,10 @@ process PICARD_MERGESAMFILES {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bams)
@@ -33,8 +33,8 @@ process PICARD_MERGESAMFILES {
         -Xmx${avail_mem}g \\
         MergeSamFiles \\
         $args \\
-        ${'INPUT='+bam_files.join(' INPUT=')} \\
-        OUTPUT=${prefix}.bam
+        ${'--INPUT '+bam_files.join(' --INPUT ')} \\
+        --OUTPUT ${prefix}.bam
     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
         picard: \$( echo \$(picard MergeSamFiles --version 2>&1) | grep -o 'Version:.*' | cut -f2- -d:)
```
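The interpolated join above is what expands a list of staged files into one repeated `--INPUT` flag per file; a standalone Groovy sketch with hypothetical file names:

```groovy
def bam_files = [ 'a.bam', 'b.bam', 'c.bam' ]
def flags = '--INPUT ' + bam_files.join(' --INPUT ')
assert flags == '--INPUT a.bam --INPUT b.bam --INPUT c.bam'
```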
modules/picard/sortsam/main.nf

```diff
@@ -2,10 +2,10 @@ process PICARD_SORTSAM {
     tag "$meta.id"
     label 'process_low'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(bam)
```
modules/picard/sortvcf/main.nf

```diff
@@ -2,10 +2,10 @@ process PICARD_SORTVCF {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+    conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-        'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+        'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

     input:
     tuple val(meta), path(vcf)
@@ -22,8 +22,8 @@ process PICARD_SORTVCF {
     script:
     def args = task.ext.args ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
-    def seq_dict = sequence_dict ? "-SEQUENCE_DICTIONARY $sequence_dict" : ""
-    def reference = reference ? "-REFERENCE_SEQUENCE $reference" : ""
+    def seq_dict = sequence_dict ? "--SEQUENCE_DICTIONARY $sequence_dict" : ""
+    def reference = reference ? "--REFERENCE_SEQUENCE $reference" : ""
     def avail_mem = 3
     if (!task.memory) {
         log.info '[Picard SortVcf] Available memory not known - defaulting to 3GB. Specify process memory requirements to change this.'
```
subworkflows/nf-core/bam_qc_picard/main.nf (new file) — 41 lines added

```diff
@@ -0,0 +1,41 @@
+//
+// Run QC steps on BAM/CRAM files using Picard
+//
+
+include { PICARD_COLLECTMULTIPLEMETRICS } from '../../../modules/picard/collectmultiplemetrics/main'
+include { PICARD_COLLECTWGSMETRICS      } from '../../../modules/picard/collectwgsmetrics/main'
+include { PICARD_COLLECTHSMETRICS       } from '../../../modules/picard/collecthsmetrics/main'
+
+workflow BAM_QC_PICARD {
+    take:
+    ch_bam              // channel: [ val(meta), [ bam ]]
+    ch_fasta            // channel: [ fasta ]
+    ch_fasta_fai        // channel: [ fasta_fai ]
+    ch_bait_interval    // channel: [ bait_interval ]
+    ch_target_interval  // channel: [ target_interval ]
+
+    main:
+    ch_versions = Channel.empty()
+    ch_coverage_metrics = Channel.empty()
+
+    PICARD_COLLECTMULTIPLEMETRICS( ch_bam, ch_fasta )
+    ch_versions = ch_versions.mix(PICARD_COLLECTMULTIPLEMETRICS.out.versions.first())
+
+    if (ch_bait_interval || ch_target_interval) {
+        if (!ch_bait_interval) log.error("Bait interval channel is empty")
+        if (!ch_target_interval) log.error("Target interval channel is empty")
+        PICARD_COLLECTHSMETRICS( ch_bam, ch_fasta, ch_fasta_fai, ch_bait_interval, ch_target_interval )
+        ch_coverage_metrics = ch_coverage_metrics.mix(PICARD_COLLECTHSMETRICS.out.metrics)
+        ch_versions = ch_versions.mix(PICARD_COLLECTHSMETRICS.out.versions.first())
+    } else {
+        PICARD_COLLECTWGSMETRICS( ch_bam, ch_fasta )
+        ch_versions = ch_versions.mix(PICARD_COLLECTWGSMETRICS.out.versions.first())
+        ch_coverage_metrics = ch_coverage_metrics.mix(PICARD_COLLECTWGSMETRICS.out.metrics)
+    }
+
+    emit:
+    coverage_metrics = ch_coverage_metrics                       // channel: [ val(meta), [ coverage_metrics ] ]
+    multiple_metrics = PICARD_COLLECTMULTIPLEMETRICS.out.metrics // channel: [ val(meta), [ multiple_metrics ] ]
+    versions         = ch_versions                               // channel: [ versions.yml ]
+}
```
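A minimal sketch of wiring the new subworkflow into a pipeline; the paths and test files are illustrative. Passing empty lists for both interval channels selects the CollectWgsMetrics branch:

```nextflow
include { BAM_QC_PICARD } from './subworkflows/nf-core/bam_qc_picard/main'

workflow {
    ch_bam   = Channel.of([ [ id:'test', single_end:false ], file('test.paired_end.sorted.bam') ])
    ch_fasta = file('genome.fasta')
    ch_fai   = file('genome.fasta.fai')

    BAM_QC_PICARD ( ch_bam, ch_fasta, ch_fai, [], [] )
    BAM_QC_PICARD.out.coverage_metrics.view()
}
```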
subworkflows/nf-core/bam_qc_picard/meta.yml (new file) — 60 lines added

```diff
@@ -0,0 +1,60 @@
+name: bam_qc
+description: Produces comprehensive statistics from BAM file
+keywords:
+  - statistics
+  - counts
+  - hs_metrics
+  - wgs_metrics
+  - bam
+  - sam
+  - cram
+modules:
+  - picard/collectmultiplemetrics
+  - picard/collectwgsmetrics
+  - picard/collecthsmetrics
+input:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - bam:
+      type: file
+      description: BAM/CRAM/SAM file
+      pattern: "*.{bam,cram,sam}"
+  - fasta:
+      type: optional file
+      description: Reference fasta file
+      pattern: "*.{fasta,fa}"
+  - fasta_fai:
+      type: optional file
+      description: Reference fasta file index
+      pattern: "*.{fasta,fa}.fai"
+  - bait_intervals:
+      type: optional file
+      description: An interval list file that contains the locations of the baits used.
+      pattern: "baits.interval_list"
+  - target_intervals:
+      type: optional file
+      description: An interval list file that contains the locations of the targets.
+      pattern: "targets.interval_list"
+output:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - coverage_metrics:
+      type: file
+      description: Alignment metrics files generated by picard CollectHsMetrics or CollectWgsMetrics
+      pattern: "*_metrics.txt"
+  - multiple_metrics:
+      type: file
+      description: Alignment metrics files generated by picard CollectMultipleMetrics
+      pattern: "*_{metrics}"
+  - versions:
+      type: file
+      description: File containing software versions
+      pattern: "versions.yml"
+authors:
+  - "@matthdsm"
```
@@ -603,6 +603,10 @@ elprep/filter:
   - modules/elprep/filter/**
   - tests/modules/elprep/filter/**
 
+elprep/merge:
+  - modules/elprep/merge/**
+  - tests/modules/elprep/merge/**
+
 elprep/split:
   - modules/elprep/split/**
   - tests/modules/elprep/split/**
@@ -14,6 +14,7 @@ params {
         genome_paf          = "${test_data_dir}/genomics/sarscov2/genome/genome.paf"
         genome_sizes        = "${test_data_dir}/genomics/sarscov2/genome/genome.sizes"
         transcriptome_fasta = "${test_data_dir}/genomics/sarscov2/genome/transcriptome.fasta"
+        proteome_fasta      = "${test_data_dir}/genomics/sarscov2/genome/proteome.fasta"
         transcriptome_paf   = "${test_data_dir}/genomics/sarscov2/genome/transcriptome.paf"
 
         test_bed            = "${test_data_dir}/genomics/sarscov2/genome/bed/test.bed"
@@ -1,8 +1,8 @@
 - name: antismash antismashlitedownloaddatabases test_antismash_antismashlitedownloaddatabases
   command: nextflow run tests/modules/antismash/antismashlitedownloaddatabases -entry test_antismash_antismashlitedownloaddatabases -c tests/config/nextflow.config
   tags:
-    - antismash
     - antismash/antismashlitedownloaddatabases
+    - antismash
   files:
     - path: output/antismash/versions.yml
       md5sum: 24859c67023abab99de295d3675a24b6

@@ -12,6 +12,5 @@
     - path: output/antismash/antismash_db/pfam
     - path: output/antismash/antismash_db/resfam
     - path: output/antismash/antismash_db/tigrfam
-    - path: output/antismash/css
-    - path: output/antismash/detection
-    - path: output/antismash/modules
+    - path: output/antismash/antismash_dir
+    - path: output/antismash/antismash_dir/detection/hmm_detection/data/bgc_seeds.hmm
@@ -2,13 +2,29 @@
 
 nextflow.enable.dsl = 2
 
-include { BAMTOOLS_SPLIT } from '../../../../modules/bamtools/split/main.nf'
+include { BAMTOOLS_SPLIT as BAMTOOLS_SPLIT_SINGLE   } from '../../../../modules/bamtools/split/main.nf'
+include { BAMTOOLS_SPLIT as BAMTOOLS_SPLIT_MULTIPLE } from '../../../../modules/bamtools/split/main.nf'
 
-workflow test_bamtools_split {
+workflow test_bamtools_split_single_input {
 
     input = [
         [ id:'test', single_end:false ], // meta map
-        file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true) ]
+        file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
+    ]
 
-    BAMTOOLS_SPLIT ( input )
+    BAMTOOLS_SPLIT_SINGLE ( input )
 }
+
+workflow test_bamtools_split_multiple {
+
+    input = [
+        [ id:'test', single_end:false ], // meta map
+        [
+            file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true),
+            file(params.test_data['homo_sapiens']['illumina']['test2_paired_end_sorted_bam'], checkIfExists: true)
+        ]
+    ]
+
+    BAMTOOLS_SPLIT_MULTIPLE ( input )
+}
@@ -1,10 +1,23 @@
-- name: bamtools split test_bamtools_split
-  command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
+- name: bamtools split test_bamtools_split_single_input
+  command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split_single_input -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
   tags:
-    - bamtools/split
     - bamtools
+    - bamtools/split
   files:
-    - path: output/bamtools/test.paired_end.sorted.REF_chr22.bam
+    - path: output/bamtools/test.REF_chr22.bam
       md5sum: b7dc50e0edf9c6bfc2e3b0e6d074dc07
-    - path: output/bamtools/test.paired_end.sorted.REF_unmapped.bam
+    - path: output/bamtools/test.REF_unmapped.bam
       md5sum: e0754bf72c51543b2d745d96537035fb
+    - path: output/bamtools/versions.yml
+
+- name: bamtools split test_bamtools_split_multiple
+  command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split_multiple -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
+  tags:
+    - bamtools
+    - bamtools/split
+  files:
+    - path: output/bamtools/test.REF_chr22.bam
+      md5sum: 585675bea34c48ebe9db06a561d4b4fa
+    - path: output/bamtools/test.REF_unmapped.bam
+      md5sum: 16ad644c87b9471f3026bc87c98b4963
+    - path: output/bamtools/versions.yml
@@ -7,9 +7,22 @@ include { DIAMOND_BLASTP } from '../../../../modules/diamond/blastp/main.nf'
 
 workflow test_diamond_blastp {
 
-    db    = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
-    fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
+    db    = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
+    fasta = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
+    out_ext = 'txt'
+    blast_columns = 'qseqid qlen'
 
     DIAMOND_MAKEDB ( db )
-    DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db )
+    DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
+}
+
+workflow test_diamond_blastp_daa {
+
+    db    = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
+    fasta = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
+    out_ext = 'daa'
+    blast_columns = []
+
+    DIAMOND_MAKEDB ( db )
+    DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
 }
@@ -1,8 +1,17 @@
-- name: diamond blastp
-  command: nextflow run ./tests/modules/diamond/blastp -entry test_diamond_blastp -c ./tests/config/nextflow.config -c ./tests/modules/diamond/blastp/nextflow.config
+- name: diamond blastp test_diamond_blastp
+  command: nextflow run tests/modules/diamond/blastp -entry test_diamond_blastp -c tests/config/nextflow.config
   tags:
-    - diamond
     - diamond/blastp
+    - diamond
   files:
-    - path: ./output/diamond/test.diamond_blastp.txt
-      md5sum: 3ca7f6290c1d8741c573370e6f8b4db0
+    - path: output/diamond/test.diamond_blastp.txt
+    - path: output/diamond/versions.yml
+
+- name: diamond blastp test_diamond_blastp_daa
+  command: nextflow run tests/modules/diamond/blastp -entry test_diamond_blastp_daa -c tests/config/nextflow.config
+  tags:
+    - diamond/blastp
+    - diamond
+  files:
+    - path: output/diamond/test.diamond_blastp.daa
+    - path: output/diamond/versions.yml
@@ -7,9 +7,22 @@ include { DIAMOND_BLASTX } from '../../../../modules/diamond/blastx/main.nf'
 
 workflow test_diamond_blastx {
 
-    db    = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
+    db    = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
     fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
+    out_ext = 'tfdfdt' // Nonsense file extension to check default case.
+    blast_columns = 'qseqid qlen'
 
     DIAMOND_MAKEDB ( db )
-    DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db )
+    DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
+}
+
+workflow test_diamond_blastx_daa {
+
+    db    = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
+    fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
+    out_ext = 'daa'
+    blast_columns = []
+
+    DIAMOND_MAKEDB ( db )
+    DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
 }
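The new out_ext/blast_columns arguments drive both the output file name and DIAMOND's --outfmt value, and the nonsense 'tfdfdt' extension above exists only to exercise the fallback branch. A plausible Groovy sketch of that dispatch, assuming the module resolves it roughly like this (resolveOutfmt is a hypothetical helper and the real module's mapping may differ; the format codes 0/5/6/100 are DIAMOND's own):

// Sketch of an extension-to-format dispatch; resolveOutfmt is hypothetical.
def resolveOutfmt(out_ext, blast_columns) {
    def columns = blast_columns ? " ${blast_columns}" : ''
    switch (out_ext) {
        case 'blast': return '0'            // pairwise alignments
        case 'xml':   return '5'            // BLAST XML
        case 'txt':   return "6${columns}"  // tabular, optionally with custom columns
        case 'daa':   return '100'          // DIAMOND archive
        default:
            // e.g. the deliberate 'tfdfdt' above: warn and fall back to tabular
            log.warn("Unknown output file format provided (${out_ext}): falling back to tabular")
            return "6${columns}"
    }
}

assert resolveOutfmt('daa', []) == '100'
assert resolveOutfmt('tfdfdt', 'qseqid qlen') == '6 qseqid qlen'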
@@ -1,8 +1,18 @@
-- name: diamond blastx
-  command: nextflow run ./tests/modules/diamond/blastx -entry test_diamond_blastx -c ./tests/config/nextflow.config -c ./tests/modules/diamond/blastx/nextflow.config
+- name: diamond blastx test_diamond_blastx
+  command: nextflow run tests/modules/diamond/blastx -entry test_diamond_blastx -c tests/config/nextflow.config
   tags:
     - diamond
     - diamond/blastx
   files:
-    - path: ./output/diamond/test.diamond_blastx.txt
-      md5sum: d41d8cd98f00b204e9800998ecf8427e
+    - path: output/diamond/test.diamond_blastx.txt
+    - path: output/diamond/versions.yml
+
+- name: diamond blastx test_diamond_blastx_daa
+  command: nextflow run tests/modules/diamond/blastx -entry test_diamond_blastx_daa -c tests/config/nextflow.config
+  tags:
+    - diamond
+    - diamond/blastx
+  files:
+    - path: output/diamond/test.diamond_blastx.daa
+      md5sum: 0df4a833408416f32981415873facc11
+    - path: output/diamond/versions.yml
@@ -6,7 +6,7 @@ include { DIAMOND_MAKEDB } from '../../../../modules/diamond/makedb/main.nf'
 
 workflow test_diamond_makedb {
 
-    input = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
+    input = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
 
     DIAMOND_MAKEDB ( input )
 }
@@ -1,8 +1,9 @@
 - name: diamond makedb test_diamond_makedb
-  command: nextflow run ./tests/modules/diamond/makedb -entry test_diamond_makedb -c ./tests/config/nextflow.config -c ./tests/modules/diamond/makedb/nextflow.config
+  command: nextflow run tests/modules/diamond/makedb -entry test_diamond_makedb -c tests/config/nextflow.config
   tags:
-    - diamond
     - diamond/makedb
+    - diamond
   files:
-    - path: output/diamond/genome.fasta.dmnd
-      md5sum: 2447fb376394c20d43ea3aad2aa5d15d
+    - path: output/diamond/proteome.fasta.dmnd
+      md5sum: fc28c50b202dd7a7c5451cddff2ba1f4
+    - path: output/diamond/versions.yml
tests/modules/elprep/merge/main.nf (new file, 17 lines)

#!/usr/bin/env nextflow

nextflow.enable.dsl = 2

include { ELPREP_SPLIT } from '../../../../modules/elprep/split/main.nf'
include { ELPREP_MERGE } from '../../../../modules/elprep/merge/main.nf'

workflow test_elprep_merge {

    input = [
        [ id:'test', single_end:false ], // meta map
        file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
    ]

    ELPREP_SPLIT ( input )
    ELPREP_MERGE ( ELPREP_SPLIT.out.bam )
}
tests/modules/elprep/merge/nextflow.config (new file, 5 lines)

process {
    withName : ELPREP_MERGE {
        publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }
    }
}
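The publishDir closure used throughout these configs derives the output directory from the fully qualified process name at runtime. A small worked example of what the expression evaluates to, using a made-up task.process value:

// Worked example of the publishDir expression; the process name is made up.
def name = 'NFCORE_TESTS:TEST_ELPREP_MERGE:ELPREP_MERGE'   // stands in for task.process
def dir = name.tokenize(':')[-1].tokenize('_')[0].toLowerCase()
// tokenize(':')[-1] -> 'ELPREP_MERGE'; tokenize('_')[0] -> 'ELPREP'; toLowerCase() -> 'elprep'
assert dir == 'elprep'   // merged BAMs therefore land under ${params.outdir}/elprep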
tests/modules/elprep/merge/test.yml (new file, 8 lines)

- name: elprep merge test_elprep_merge
  command: nextflow run tests/modules/elprep/merge -entry test_elprep_merge -c tests/config/nextflow.config
  tags:
    - elprep
    - elprep/merge
  files:
    - path: output/elprep/output/test.bam
    - path: output/elprep/versions.yml
@@ -7,4 +7,3 @@
     - path: output/picard/test.bam
       md5sum: 7b82f3461c2d80fc6a10385e78c9427f
     - path: output/picard/versions.yml
-      md5sum: 8a2d176295e1343146ea433c79bb517f
@@ -7,4 +7,3 @@
     - path: output/picard/test.bam
       md5sum: a48f8e77a1480445efc57570c3a38a68
     - path: output/picard/versions.yml
-      md5sum: e6457d7c6de51bf6f4b577eda65e57ac
@@ -6,8 +6,7 @@ include { PICARD_COLLECTWGSMETRICS } from '../../../../modules/picard/collectwgsmetrics/main.nf'
 
 workflow test_picard_collectwgsmetrics {
     input = [ [ id:'test', single_end:false ], // meta map
               file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true),
-              file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam_bai'], checkIfExists: true)
             ]
     fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
@@ -7,4 +7,3 @@
     - path: output/picard/test.dict
       contains: ["SN:MT192765.1"]
     - path: output/picard/versions.yml
-      md5sum: b3d8c7ea65b8a6d3237b153d13fe2014
@@ -7,4 +7,3 @@
     - path: output/picard/test.bam
       md5sum: 746102e8c242c0ef42e045c49d320030
     - path: output/picard/versions.yml
-      md5sum: 4329ba7cdca8f4f6018dfd5c019ba2eb
@@ -1,5 +1,5 @@
 process {
-    ext.args = "WARN_ON_MISSING_CONTIG=true"
+    ext.args = "--WARN_ON_MISSING_CONTIG true"
     publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }
 
 }
@@ -3,7 +3,7 @@ process {
     publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }
 
     withName: PICARD_MARKDUPLICATES_UNSORTED {
-        ext.args = 'ASSUME_SORT_ORDER=queryname'
+        ext.args = '--ASSUME_SORT_ORDER queryname'
    }
 
 }
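Both hunks above migrate ext.args from Picard's legacy KEY=value syntax to the newer --KEY value style. As a reminder of how such a string reaches the tool, here is an illustrative module excerpt (PICARD_MARKDUPLICATES_SKETCH is a hypothetical process, not this repo's actual module) in which ext.args is interpolated verbatim into the command:

// Hypothetical module excerpt showing where ext.args lands in the command line.
process PICARD_MARKDUPLICATES_SKETCH {
    input:
    tuple val(meta), path(bam)

    output:
    tuple val(meta), path("${meta.id}.md.bam"), emit: bam

    script:
    def args = task.ext.args ?: ''   // e.g. '--ASSUME_SORT_ORDER queryname' from the config
    """
    picard MarkDuplicates \\
        $args \\
        --INPUT $bam \\
        --OUTPUT ${meta.id}.md.bam \\
        --METRICS_FILE ${meta.id}.MarkDuplicates.metrics.txt
    """
}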
tests/subworkflows/nf-core/bam_qc_picard/main.nf (new file, 27 lines)

#!/usr/bin/env nextflow

nextflow.enable.dsl = 2

include { BAM_QC_PICARD } from '../../../../subworkflows/nf-core/bam_qc_picard/main' addParams([:])

workflow test_bam_qc_picard_wgs {
    input = [ [ id:'test', single_end:false ], // meta map
              file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
            ]
    fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
    fasta_fai = file(params.test_data['sarscov2']['genome']['genome_fasta_fai'], checkIfExists: true)

    BAM_QC_PICARD ( input, fasta, fasta_fai, [], [] )
}

workflow test_bam_qc_picard_targetted {
    input = [ [ id:'test', single_end:false ], // meta map
              file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
            ]
    fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
    fasta_fai = file(params.test_data['sarscov2']['genome']['genome_fasta_fai'], checkIfExists: true)
    bait = file(params.test_data['sarscov2']['genome']['baits_interval_list'], checkIfExists: true)
    target = file(params.test_data['sarscov2']['genome']['targets_interval_list'], checkIfExists: true)

    BAM_QC_PICARD ( input, fasta, fasta_fai, bait, target )
}
tests/subworkflows/nf-core/bam_qc_picard/nextflow.config (new file, 5 lines)

process {

    publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }

}
tests/subworkflows/nf-core/bam_qc_picard/test.yml (new file, 33 lines)

- name: bam qc picard wgs
  command: nextflow run ./tests/subworkflows/nf-core/bam_qc_picard -entry test_bam_qc_picard_wgs -c tests/config/nextflow.config
  tags:
    - subworkflows
    # - subworkflows/bam_qc_picard
    # Modules
    # - picard
    # - picard/collectmultiplemetrics
    # - picard/collectwgsmetrics
  files:
    - path: ./output/picard/test.CollectMultipleMetrics.alignment_summary_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.insert_size_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.base_distribution_by_cycle_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.quality_by_cycle_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.quality_distribution_metrics
    - path: ./output/picard/test.CollectWgsMetrics.coverage_metrics

- name: bam qc picard targetted
  command: nextflow run ./tests/subworkflows/nf-core/bam_qc_picard -entry test_bam_qc_picard_targetted -c tests/config/nextflow.config
  tags:
    - subworkflows
    # - subworkflows/bam_qc_picard
    # Modules
    # - picard
    # - picard/collectmultiplemetrics
    # - picard/collecthsmetrics
  files:
    - path: ./output/picard/test.CollectMultipleMetrics.alignment_summary_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.insert_size_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.base_distribution_by_cycle_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.quality_by_cycle_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.quality_distribution_metrics
    - path: ./output/picard/test.CollectHsMetrics.coverage_metrics