Merge branch 'antismashlite' of github.com:jasmezz/modules into antismashlite

jasmezz 2022-05-02 17:56:23 +02:00
commit d19e65d2a2
91 changed files with 1194 additions and 326 deletions


@@ -1,64 +0,0 @@
---
name: Bug report
about: Report something that is broken or incorrect
title: "[BUG]"
---
<!--
# nf-core/module bug report
Hi there!
Thanks for telling us about a problem with the modules.
Please delete this text and anything that's not relevant from the template below:
-->
## Check Documentation
I have checked the following places for your error:
- [ ] [nf-core website: troubleshooting](https://nf-co.re/usage/troubleshooting)
- [ ] [nf-core/module documentation](https://github.com/nf-core/modules/blob/master/README.md)
## Description of the bug
<!-- A clear and concise description of what the bug is. -->
## Steps to reproduce
Steps to reproduce the behaviour:
1. Command line: <!-- [e.g. `nextflow run ...`] -->
2. See error: <!-- [Please provide your error message] -->
## Expected behaviour
<!-- A clear and concise description of what you expected to happen. -->
## Log files
Have you provided the following extra information/files:
- [ ] The command used to run the module
- [ ] The `.nextflow.log` file <!-- this is a hidden file in the directory where you launched the module -->
## System
- Hardware: <!-- [e.g. HPC, Desktop, Cloud...] -->
- Executor: <!-- [e.g. slurm, local, awsbatch...] -->
- OS: <!-- [e.g. CentOS Linux, macOS, Linux Mint...] -->
- Version <!-- [e.g. 7, 10.13.6, 18.3...] -->
## Nextflow Installation
- Version: <!-- [e.g. 19.10.0] -->
## Container engine
- Engine: <!-- [e.g. Conda, Docker, Singularity or Podman] -->
- version: <!-- [e.g. 1.0.0] -->
- Image tag: <!-- [e.g. nfcore/module:2.6] -->
## Additional context
<!-- Add any other context about the problem here. -->

.github/ISSUE_TEMPLATE/bug_report.yml

@@ -0,0 +1,52 @@
name: Bug report
description: Report something that is broken or incorrect
labels: bug
body:
  - type: checkboxes
    attributes:
      label: Have you checked the docs?
      description: I have checked the following places for my error
      options:
        - label: "[nf-core website: troubleshooting](https://nf-co.re/usage/troubleshooting)"
          required: true
        - label: "[nf-core modules documentation](https://nf-co.re/docs/contributing/modules)"
          required: true
  - type: textarea
    id: description
    attributes:
      label: Description of the bug
      description: A clear and concise description of what the bug is.
    validations:
      required: true
  - type: textarea
    id: command_used
    attributes:
      label: Command used and terminal output
      description: Steps to reproduce the behaviour. Please paste the command you used to launch the pipeline and the output from your terminal.
      render: console
      placeholder: |
        $ nextflow run ...
        Some output where something broke
  - type: textarea
    id: files
    attributes:
      label: Relevant files
      description: |
        Please drag and drop the relevant files here. Create a `.zip` archive if the extension is not allowed.
        Your verbose log file `.nextflow.log` is often useful _(this is a hidden file in the directory where you launched the pipeline)_ as well as custom Nextflow configuration files.
  - type: textarea
    id: system
    attributes:
      label: System information
      description: |
        * Nextflow version _(eg. 21.10.3)_
        * Hardware _(eg. HPC, Desktop, Cloud)_
        * Executor _(eg. slurm, local, awsbatch)_
        * Container engine and version: _(e.g. Docker 1.0.0, Singularity, Conda, Podman, Shifter or Charliecloud)_
        * OS and version: _(eg. CentOS Linux, macOS, Ubuntu 22.04)_
        * Image tag: <!-- [e.g. nfcore/cellranger:2.6] -->


@@ -1,32 +0,0 @@
---
name: Feature request
about: Suggest an idea for nf-core/modules
title: "[FEATURE]"
---
<!--
# nf-core/modules feature request
Hi there!
Thanks for suggesting a new feature for the modules!
Please delete this text and anything that's not relevant from the template below:
-->
## Is your feature request related to a problem? Please describe
<!-- A clear and concise description of what the problem is. -->
<!-- e.g. [I'm always frustrated when ...] -->
## Describe the solution you'd like
<!-- A clear and concise description of what you want to happen. -->
## Describe alternatives you've considered
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
## Additional context
<!-- Add any other context about the feature request here. -->


@@ -0,0 +1,32 @@
name: Feature request
description: Suggest an idea for nf-core/modules
labels: feature
title: "[FEATURE]"
body:
  - type: textarea
    id: description
    attributes:
      label: Is your feature request related to a problem? Please describe
      description: A clear and concise description of what the problem is.
      placeholder: |
        <!-- e.g. [I'm always frustrated when ...] -->
    validations:
      required: true
  - type: textarea
    id: solution
    attributes:
      label: Describe the solution you'd like
      description: A clear and concise description of the solution you want to happen.
  - type: textarea
    id: alternatives
    attributes:
      label: Describe alternatives you've considered
      description: A clear and concise description of any alternative solutions or features you've considered.
  - type: textarea
    id: additional_context
    attributes:
      label: Additional context
      description: Add any other context about the feature request here.


@@ -1,26 +0,0 @@
---
name: New module
about: Suggest a new module for nf-core/modules
title: "new module: TOOL/SUBTOOL"
label: new module
---
<!--
# nf-core/modules new module suggestion
Hi there!
Thanks for suggesting a new module for the modules!
Please delete this text and anything that's not relevant from the template below:
Replace TOOL with the bioconda name for the tool in the following text, so that the link is functional.
Replace TOOL/SUBTOOL in the issue title so that it's understandable.
-->
I think it would be good to have a module for [TOOL](https://bioconda.github.io/recipes/TOOL/README.html)
- [ ] This module does not exist yet with the [`nf-core modules list`](https://github.com/nf-core/tools#list-modules) command
- [ ] There is no [open pull request](https://github.com/nf-core/modules/pulls) for this module
- [ ] There is no [open issue](https://github.com/nf-core/modules/issues) for this module
- [ ] If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module

.github/ISSUE_TEMPLATE/new_module.yml

@@ -0,0 +1,36 @@
name: New module
description: Suggest a new module for nf-core/modules
title: "new module: TOOL/SUBTOOL"
labels: new module
body:
  - type: checkboxes
    attributes:
      label: Is there an existing module for this?
      description: This module does not exist yet with the [`nf-core modules list`](https://github.com/nf-core/tools#list-modules) command
      options:
        - label: I have searched for the existing module
          required: true
  - type: checkboxes
    attributes:
      label: Is there an open PR for this?
      description: There is no [open pull request](https://github.com/nf-core/modules/pulls) for this module
      options:
        - label: I have searched for existing PRs
          required: true
  - type: checkboxes
    attributes:
      label: Is there an open issue for this?
      description: There is no [open issue](https://github.com/nf-core/modules/issues) for this module
      options:
        - label: I have searched for existing issues
          required: true
  - type: checkboxes
    attributes:
      label: Are you going to work on this?
      description: If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
      options:
        - label: If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
          required: false

@@ -7,8 +7,9 @@ process ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES {
        'quay.io/biocontainers/antismash-lite:6.0.1--pyhdfd78af_1' }"
    /*
-   These files are normally downloaded by download-antismash-databases itself, and must be retrieved for input by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines. This is solely for use for CI tests of the nf-core/module version of antiSMASH.
+   These files are normally downloaded/created by download-antismash-databases itself, and must be retrieved for input by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines. This is solely for use for CI tests of the nf-core/module version of antiSMASH.
    Reason: Upon execution, the tool checks if certain database files are present within the container and if not, it tries to create them in /usr/local/bin, for which only root user has write permissions. Mounting those database files with this module prevents the tool from trying to create them.
+   These files are also emitted as output channels in this module to enable the antismash-lite module to use them as mount volumes to the docker/singularity containers.
    */
    containerOptions {
@@ -26,6 +27,9 @@ process ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES {
    output:
    path("antismash_db") , emit: database
+   path("css"), emit: css_dir
+   path("detection"), emit: detection_dir
+   path("modules"), emit: modules_dir
    path "versions.yml", emit: versions

    when:
@@ -40,7 +44,7 @@ process ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES {
    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
-       antismash: \$(antismash --version | sed 's/antiSMASH //')
+       antismash-lite: \$(antismash --version | sed 's/antiSMASH //')
    END_VERSIONS
    """
}

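To make the mounting comment concrete, here is a minimal sketch of how the new directory channels could feed a downstream antismash-lite call in a CI test workflow. The ANTISMASH_ANTISMASHLITE input order and the channel names are assumptions for illustration, not the module's confirmed signature:

// Hypothetical test wiring: the extra directories emitted above are handed to
// the antismash-lite module so its container can mount them instead of trying
// to create them under /usr/local/bin. Channel names are illustrative only.
workflow TEST_ANTISMASH {
    ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES ( ch_css, ch_detection, ch_modules )
    ANTISMASH_ANTISMASHLITE (
        ch_gbk, // tuple val(meta), path(gbk) -- assumed input shape
        ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES.out.database,
        ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES.out.css_dir,
        ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES.out.detection_dir,
        ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES.out.modules_dir
    )
}
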
@@ -27,17 +27,17 @@ input:
  - database_css:
      type: directory
      description: |
-       antismash/outputs/html/css folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the use by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
+       antismash/outputs/html/css folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
      pattern: "css"
  - database_detection:
      type: directory
      description: |
-       antismash/detection folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the use by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
+       antismash/detection folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
      pattern: "detection"
  - database_modules:
      type: directory
      description: |
-       antismash/modules folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the use by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
+       antismash/modules folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
      pattern: "modules"

output:
@@ -50,6 +50,21 @@ output:
      type: directory
      description: Download directory for antiSMASH databases
      pattern: "antismash_db"
+  - css_dir:
+      type: directory
+      description: |
+        antismash/outputs/html/css folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
+      pattern: "css"
+  - detection_dir:
+      type: directory
+      description: |
+        antismash/detection folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
+      pattern: "detection"
+  - modules_dir:
+      type: directory
+      description: |
+        antismash/modules folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
+      pattern: "modules"
authors:
  - "@jasmezz"

@@ -2,10 +2,10 @@ process BAMTOOLS_SPLIT {
    tag "$meta.id"
    label 'process_low'
-   conda (params.enable_conda ? "bioconda::bamtools=2.5.1" : null)
+   conda (params.enable_conda ? "bioconda::bamtools=2.5.2" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/bamtools:2.5.1--h9a82719_9' :
-       'quay.io/biocontainers/bamtools:2.5.1--h9a82719_9' }"
+       'https://depot.galaxyproject.org/singularity/bamtools:2.5.2--hd03093a_0' :
+       'quay.io/biocontainers/bamtools:2.5.2--hd03093a_0' }"

    input:
    tuple val(meta), path(bam)
@@ -20,11 +20,15 @@ process BAMTOOLS_SPLIT {
    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
+   def input_list = bam.collect{"-in $it"}.join(' ')
    """
    bamtools \\
-       split \\
-       -in $bam \\
-       $args
+       merge \\
+       $input_list \\
+       | bamtools \\
+       split \\
+       -stub $prefix \\
+       $args

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":

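For clarity, a short sketch of what the new merge-then-split behaviour does with a multi-BAM input; the channel contents are illustrative:

// With two input chunks, input_list expands to "-in a.bam -in b.bam", so the
// script runs: bamtools merge -in a.bam -in b.bam | bamtools split -stub test <args>
ch_bam = Channel.of([ [ id:'test' ], [ file('a.bam'), file('b.bam') ] ])
BAMTOOLS_SPLIT ( ch_bam )
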
@@ -23,7 +23,7 @@ input:
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
-     description: A BAM file to split
+     description: A list of one or more BAM files to merge and then split
      pattern: "*.bam"

output:
@@ -43,3 +43,4 @@ output:
authors:
  - "@sguizard"
+ - "@matthdsm"

@@ -2,10 +2,10 @@ process CUSTOM_GETCHROMSIZES {
    tag "$fasta"
    label 'process_low'
-   conda (params.enable_conda ? "bioconda::samtools=1.15" : null)
+   conda (params.enable_conda ? "bioconda::samtools=1.15.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/samtools:1.15--h1170115_1' :
-       'quay.io/biocontainers/samtools:1.15--h1170115_1' }"
+       'https://depot.galaxyproject.org/singularity/samtools:1.15.1--h1170115_0' :
+       'quay.io/biocontainers/samtools:1.15.1--h1170115_0' }"

    input:
    path fasta

@@ -2,20 +2,26 @@ process DIAMOND_BLASTP {
    tag "$meta.id"
    label 'process_medium'
-   // Dimaond is limited to v2.0.9 because there is not a
-   // singularity version higher than this at the current time.
-   conda (params.enable_conda ? "bioconda::diamond=2.0.9" : null)
+   conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
-       'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
+       'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
+       'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"

    input:
    tuple val(meta), path(fasta)
    path db
+   val out_ext
+   val blast_columns

    output:
-   tuple val(meta), path('*.txt'), emit: txt
-   path "versions.yml" , emit: versions
+   tuple val(meta), path('*.blast'), optional: true, emit: blast
+   tuple val(meta), path('*.xml') , optional: true, emit: xml
+   tuple val(meta), path('*.txt') , optional: true, emit: txt
+   tuple val(meta), path('*.daa') , optional: true, emit: daa
+   tuple val(meta), path('*.sam') , optional: true, emit: sam
+   tuple val(meta), path('*.tsv') , optional: true, emit: tsv
+   tuple val(meta), path('*.paf') , optional: true, emit: paf
+   path "versions.yml" , emit: versions

    when:
    task.ext.when == null || task.ext.when
@@ -23,6 +29,21 @@ process DIAMOND_BLASTP {
    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
+   def columns = blast_columns ? "${blast_columns}" : ''
+   switch ( out_ext ) {
+       case "blast": outfmt = 0; break
+       case "xml": outfmt = 5; break
+       case "txt": outfmt = 6; break
+       case "daa": outfmt = 100; break
+       case "sam": outfmt = 101; break
+       case "tsv": outfmt = 102; break
+       case "paf": outfmt = 103; break
+       default:
+           outfmt = '6';
+           out_ext = 'txt';
+           log.warn("Unknown output file format provided (${out_ext}): selecting DIAMOND default of tabular BLAST output (txt)");
+           break
+   }
    """
    DB=`find -L ./ -name "*.dmnd" | sed 's/.dmnd//'`
@@ -31,8 +52,9 @@ process DIAMOND_BLASTP {
        --threads $task.cpus \\
        --db \$DB \\
        --query $fasta \\
+       --outfmt ${outfmt} ${columns} \\
        $args \\
-       --out ${prefix}.txt
+       --out ${prefix}.${out_ext}

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":

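A usage sketch for the two new value inputs; the query and database channels are assumed to exist:

// out_ext 'txt' maps to --outfmt 6 via the switch above, and blast_columns is
// appended after it, i.e. the command gains: --outfmt 6 qseqid sseqid pident
DIAMOND_BLASTP (
    ch_fasta,               // tuple val(meta), path(fasta)
    ch_diamond_db,          // directory containing a *.dmnd file
    'txt',                  // out_ext -> tabular output named <prefix>.txt
    'qseqid sseqid pident'  // blast_columns, only used with tabular output
)
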
@@ -28,12 +28,50 @@ input:
      type: directory
      description: Directory containing the protein blast database
      pattern: "*"
+  - out_ext:
+      type: string
+      description: |
+        Specify the type of output file to be generated. `blast` corresponds to
+        BLAST pairwise format. `xml` corresponds to BLAST xml format.
+        `txt` corresponds to BLAST tabular format. `tsv` corresponds to
+        taxonomic classification format.
+      pattern: "blast|xml|txt|daa|sam|tsv|paf"
+  - blast_columns:
+      type: string
+      description: |
+        Optional space separated list of DIAMOND tabular BLAST output keywords
+        used in conjunction with the 'txt' out_ext option (--outfmt 6). See
+        the DIAMOND documentation for more information.
output:
-  - txt:
+  - blast:
      type: file
      description: File containing blastp hits
-     pattern: "*.{blastp.txt}"
+     pattern: "*.{blast}"
+  - xml:
+      type: file
+      description: File containing blastp hits
+      pattern: "*.{xml}"
+  - txt:
+      type: file
+      description: File containing hits in tabular BLAST format.
+      pattern: "*.{txt}"
+  - daa:
+      type: file
+      description: File containing hits in DAA format
+      pattern: "*.{daa}"
+  - sam:
+      type: file
+      description: File containing aligned reads in SAM format
+      pattern: "*.{sam}"
+  - tsv:
+      type: file
+      description: Tab separated file containing taxonomic classification of hits
+      pattern: "*.{tsv}"
+  - paf:
+      type: file
+      description: File containing aligned reads in pairwise mapping format
+      pattern: "*.{paf}"
  - versions:
      type: file
      description: File containing software versions
@@ -41,3 +79,4 @@ output:
authors:
  - "@spficklin"
+ - "@jfy133"

@@ -2,20 +2,26 @@ process DIAMOND_BLASTX {
    tag "$meta.id"
    label 'process_medium'
-   // Dimaond is limited to v2.0.9 because there is not a
-   // singularity version higher than this at the current time.
-   conda (params.enable_conda ? "bioconda::diamond=2.0.9" : null)
+   conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
-       'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
+       'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
+       'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"

    input:
    tuple val(meta), path(fasta)
    path db
+   val out_ext
+   val blast_columns

    output:
-   tuple val(meta), path('*.txt'), emit: txt
-   path "versions.yml" , emit: versions
+   tuple val(meta), path('*.blast'), optional: true, emit: blast
+   tuple val(meta), path('*.xml') , optional: true, emit: xml
+   tuple val(meta), path('*.txt') , optional: true, emit: txt
+   tuple val(meta), path('*.daa') , optional: true, emit: daa
+   tuple val(meta), path('*.sam') , optional: true, emit: sam
+   tuple val(meta), path('*.tsv') , optional: true, emit: tsv
+   tuple val(meta), path('*.paf') , optional: true, emit: paf
+   path "versions.yml" , emit: versions

    when:
    task.ext.when == null || task.ext.when
@@ -23,6 +29,21 @@ process DIAMOND_BLASTX {
    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
+   def columns = blast_columns ? "${blast_columns}" : ''
+   switch ( out_ext ) {
+       case "blast": outfmt = 0; break
+       case "xml": outfmt = 5; break
+       case "txt": outfmt = 6; break
+       case "daa": outfmt = 100; break
+       case "sam": outfmt = 101; break
+       case "tsv": outfmt = 102; break
+       case "paf": outfmt = 103; break
+       default:
+           outfmt = '6';
+           out_ext = 'txt';
+           log.warn("Unknown output file format provided (${out_ext}): selecting DIAMOND default of tabular BLAST output (txt)");
+           break
+   }
    """
    DB=`find -L ./ -name "*.dmnd" | sed 's/.dmnd//'`
@@ -31,8 +52,9 @@ process DIAMOND_BLASTX {
        --threads $task.cpus \\
        --db \$DB \\
        --query $fasta \\
+       --outfmt ${outfmt} ${columns} \\
        $args \\
-       --out ${prefix}.txt
+       --out ${prefix}.${out_ext}

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":

@@ -28,12 +28,44 @@ input:
      type: directory
      description: Directory containing the nucleotide blast database
      pattern: "*"
+  - out_ext:
+      type: string
+      description: |
+        Specify the type of output file to be generated. `blast` corresponds to
+        BLAST pairwise format. `xml` corresponds to BLAST xml format.
+        `txt` corresponds to BLAST tabular format. `tsv` corresponds to
+        taxonomic classification format.
+      pattern: "blast|xml|txt|daa|sam|tsv|paf"
output:
+  - blast:
+      type: file
+      description: File containing blastx hits
+      pattern: "*.{blast}"
+  - xml:
+      type: file
+      description: File containing blastx hits
+      pattern: "*.{xml}"
  - txt:
      type: file
-     description: File containing blastx hits
-     pattern: "*.{blastx.txt}"
+     description: File containing hits in tabular BLAST format.
+     pattern: "*.{txt}"
+  - daa:
+      type: file
+      description: File containing hits in DAA format
+      pattern: "*.{daa}"
+  - sam:
+      type: file
+      description: File containing aligned reads in SAM format
+      pattern: "*.{sam}"
+  - tsv:
+      type: file
+      description: Tab separated file containing taxonomic classification of hits
+      pattern: "*.{tsv}"
+  - paf:
+      type: file
+      description: File containing aligned reads in pairwise mapping format
+      pattern: "*.{paf}"
  - versions:
      type: file
      description: File containing software versions
@@ -41,3 +73,4 @@ output:
authors:
  - "@spficklin"
+ - "@jfy133"

@@ -2,12 +2,10 @@ process DIAMOND_MAKEDB {
    tag "$fasta"
    label 'process_medium'
-   // Dimaond is limited to v2.0.9 because there is not a
-   // singularity version higher than this at the current time.
-   conda (params.enable_conda ? 'bioconda::diamond=2.0.9' : null)
+   conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
-       'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
+       'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
+       'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"

    input:
    path fasta


@@ -0,0 +1,43 @@
process ELPREP_MERGE {
    tag "$meta.id"
    label 'process_low'

    conda (params.enable_conda ? "bioconda::elprep=5.1.2" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://depot.galaxyproject.org/singularity/elprep:5.1.2--he881be0_0':
        'quay.io/biocontainers/elprep:5.1.2--he881be0_0' }"

    input:
    tuple val(meta), path(bam)

    output:
    tuple val(meta), path("output/**.{bam,sam}") , emit: bam
    path "versions.yml" , emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
    def suffix = args.contains("--output-type sam") ? "sam" : "bam"
    def single_end = meta.single_end ? " --single-end" : ""
    """
    # create directory and move all input so elprep can find and merge them before splitting
    mkdir input
    mv ${bam} input/

    elprep merge \\
        input/ \\
        output/${prefix}.${suffix} \\
        $args \\
        ${single_end} \\
        --nr-of-threads $task.cpus

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        elprep: \$(elprep 2>&1 | head -n2 | tail -n1 |sed 's/^.*version //;s/ compiled.*\$//')
    END_VERSIONS
    """
}


@@ -0,0 +1,44 @@
name: "elprep_merge"
description: Merge split bam/sam chunks in one file
keywords:
- bam
- sam
- merge
tools:
- "elprep":
description: "elPrep is a high-performance tool for preparing .sam/.bam files for variant calling in sequencing pipelines. It can be used as a drop-in replacement for SAMtools/Picard/GATK4."
homepage: "https://github.com/ExaScience/elprep"
documentation: "https://github.com/ExaScience/elprep"
tool_dev_url: "https://github.com/ExaScience/elprep"
doi: "10.1371/journal.pone.0244471"
licence: "['AGPL v3']"
input:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- bam:
type: file
description: List of BAM/SAM chunks to merge
pattern: "*.{bam,sam}"
output:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
#
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
- bam:
type: file
description: Merged BAM/SAM file
pattern: "*.{bam,sam}"
authors:
- "@matthdsm"

@@ -17,7 +17,7 @@ process GATK4_HAPLOTYPECALLER {
    output:
    tuple val(meta), path("*.vcf.gz"), emit: vcf
-   tuple val(meta), path("*.tbi")   , emit: tbi
+   tuple val(meta), path("*.tbi")   , optional:true, emit: tbi
    path "versions.yml"              , emit: versions

    when:

@@ -12,7 +12,7 @@ process GATK4_MARKDUPLICATES {
    output:
    tuple val(meta), path("*.bam")    , emit: bam
-   tuple val(meta), path("*.bai")    , emit: bai
+   tuple val(meta), path("*.bai")    , optional:true, emit: bai
    tuple val(meta), path("*.metrics"), emit: metrics
    path "versions.yml"               , emit: versions

@@ -8,7 +8,7 @@ process GATK4_SPLITNCIGARREADS {
        'quay.io/biocontainers/gatk4:4.2.5.0--hdfd78af_0' }"

    input:
-   tuple val(meta), path(bam)
+   tuple val(meta), path(bam), path(bai), path(intervals)
    path fasta
    path fai
    path dict
@@ -23,6 +23,7 @@ process GATK4_SPLITNCIGARREADS {
    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
+   def interval_command = intervals ? "--intervals $intervals" : ""

    def avail_mem = 3
    if (!task.memory) {
@@ -35,6 +36,7 @@ process GATK4_SPLITNCIGARREADS {
        --input $bam \\
        --output ${prefix}.bam \\
        --reference $fasta \\
+       $interval_command \\
        --tmp-dir . \\
        $args

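A sketch of the extended input tuple; per nf-core convention an empty list can stand in for the optional entries, in which case interval_command resolves to an empty string. File names are examples:

// With a BED file the tool gains "--intervals exons.bed"; pass [] to skip it.
ch_bam = Channel.of([ [ id:'test' ], file('test.bam'), file('test.bam.bai'), file('exons.bed') ])
GATK4_SPLITNCIGARREADS ( ch_bam, ch_fasta, ch_fai, ch_dict )
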
@@ -23,6 +23,13 @@ input:
      type: list
      description: BAM/SAM/CRAM file containing reads
      pattern: "*.{bam,sam,cram}"
+  - bai:
+      type: list
+      description: BAI/SAI/CRAI index file (optional)
+      pattern: "*.{bai,sai,crai}"
+  - intervals:
+      type: file
+      description: Bed file with the genomic regions included in the library (optional)
  - fasta:
      type: file
      description: The reference fasta file


@@ -0,0 +1,34 @@
process KRONA_KTIMPORTTEXT {
    tag "$meta.id"
    label 'process_low'

    conda (params.enable_conda ? "bioconda::krona=2.8.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://depot.galaxyproject.org/singularity/krona:2.8.1--pl5321hdfd78af_1':
        'quay.io/biocontainers/krona:2.8.1--pl5321hdfd78af_1' }"

    input:
    tuple val(meta), path(report)

    output:
    tuple val(meta), path ('*.html'), emit: html
    path "versions.yml" , emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
    """
    ktImportText \\
        $args \\
        -o ${prefix}.html \\
        $report

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        krona: \$( echo \$(ktImportText 2>&1) | sed 's/^.*KronaTools //g; s/- ktImportText.*\$//g')
    END_VERSIONS
    """
}

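A minimal usage sketch for the new module; the input file name and contents are illustrative of the ktImportText text format (count, tab, lineage):

// taxonomy.txt might contain tab-delimited lines such as:
//   2   Fats    Saturated fat
//   3   Fats    Unsaturated fat Monounsaturated fat
// The process then runs: ktImportText -o test.html taxonomy.txt
ch_report = Channel.of([ [ id:'test' ], file('taxonomy.txt') ])
KRONA_KTIMPORTTEXT ( ch_report )
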

@@ -0,0 +1,47 @@
name: "krona_ktimporttext"
description: Creates a Krona chart from text files listing quantities and lineages.
keywords:
- plot
- taxonomy
- interactive
- html
- visualisation
- krona chart
- metagenomics
tools:
- krona:
description: Krona Tools is a set of scripts to create Krona charts from several Bioinformatics tools as well as from text and XML files.
homepage: https://github.com/marbl/Krona/wiki/KronaTools
documentation: http://manpages.ubuntu.com/manpages/impish/man1/ktImportTaxonomy.1.html
tool_dev_url: https://github.com/marbl/Krona
doi: 10.1186/1471-2105-12-385
licence: https://raw.githubusercontent.com/marbl/Krona/master/KronaTools/LICENSE.txt
input:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test']
- report:
type: file
description: "Tab-delimited text file. Each line should be a number followed by a list of wedges to contribute to (starting from the highest level). If no wedges are listed (and just a quantity is given), it will contribute to the top level. If the same lineage is listed more than once, the values will be added. Quantities can be omitted if -q is specified. Lines beginning with '#' will be ignored."
pattern: "*.{txt}"
output:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test' ]
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
- html:
type: file
description: A html file containing an interactive krona plot.
pattern: "*.{html}"
authors:
- "@jianhong"

@@ -2,18 +2,22 @@ process MINIMAP2_ALIGN {
    tag "$meta.id"
    label 'process_medium'
-   conda (params.enable_conda ? 'bioconda::minimap2=2.21' : null)
+   conda (params.enable_conda ? 'bioconda::minimap2=2.21 bioconda::samtools=1.12' : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/minimap2:2.21--h5bf99c6_0' :
-       'quay.io/biocontainers/minimap2:2.21--h5bf99c6_0' }"
+       'https://depot.galaxyproject.org/singularity/mulled-v2-66534bcbb7031a148b13e2ad42583020b9cd25c4:1679e915ddb9d6b4abda91880c4b48857d471bd8-0' :
+       'quay.io/biocontainers/mulled-v2-66534bcbb7031a148b13e2ad42583020b9cd25c4:1679e915ddb9d6b4abda91880c4b48857d471bd8-0' }"

    input:
    tuple val(meta), path(reads)
    path reference
+   val bam_format
+   val cigar_paf_format
+   val cigar_bam

    output:
-   tuple val(meta), path("*.paf"), emit: paf
-   path "versions.yml"           , emit: versions
+   tuple val(meta), path("*.paf"), optional: true, emit: paf
+   tuple val(meta), path("*.bam"), optional: true, emit: bam
+   path "versions.yml"                           , emit: versions

    when:
    task.ext.when == null || task.ext.when
@@ -22,13 +26,19 @@ process MINIMAP2_ALIGN {
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
    def input_reads = meta.single_end ? "$reads" : "${reads[0]} ${reads[1]}"
+   def bam_output = bam_format ? "-a | samtools sort | samtools view -@ ${task.cpus} -b -h -o ${prefix}.bam" : "-o ${prefix}.paf"
+   def cigar_paf = cigar_paf_format && !bam_format ? "-c" : ''
+   def set_cigar_bam = cigar_bam && bam_format ? "-L" : ''
    """
    minimap2 \\
        $args \\
        -t $task.cpus \\
        $reference \\
        $input_reads \\
-       > ${prefix}.paf
+       $cigar_paf \\
+       $set_cigar_bam \\
+       $bam_output

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":

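An invocation sketch for the three new boolean inputs; the read and reference channels are assumed:

// bam_format=true replaces "> prefix.paf" with
// "-a | samtools sort | samtools view -@ <cpus> -b -h -o prefix.bam";
// cigar_paf_format only applies in PAF mode, cigar_bam only in BAM mode.
MINIMAP2_ALIGN (
    ch_reads,  // tuple val(meta), path(reads)
    ch_fasta,  // reference FASTA
    true,      // bam_format  -> emit *.bam instead of *.paf
    false,     // cigar_paf_format (-c; ignored when bam_format is true)
    true       // cigar_bam (-L; long CIGARs stored at the CG tag)
)
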
@@ -29,6 +29,17 @@ input:
      type: file
      description: |
        Reference database in FASTA format.
+  - bam_format:
+      type: boolean
+      description: Specify that output should be in BAM format
+  - cigar_paf_format:
+      type: boolean
+      description: Specify that output CIGAR should be in PAF format
+  - cigar_bam:
+      type: boolean
+      description: |
+        Write CIGAR with >65535 ops at the CG tag. This is recommended when
+        doing XYZ (https://github.com/lh3/minimap2#working-with-65535-cigar-operations)
output:
  - meta:
      type: map
@@ -39,9 +50,16 @@ output:
      type: file
      description: Alignment in PAF format
      pattern: "*.paf"
+  - bam:
+      type: file
+      description: Alignment in BAM format
+      pattern: "*.bam"
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"

authors:
  - "@heuermh"
+ - "@sofstam"
+ - "@sateeshperi"
+ - "@jfy133"

@@ -22,11 +22,12 @@ process PHANTOMPEAKQUALTOOLS {
    task.ext.when == null || task.ext.when

    script:
    def args = task.ext.args ?: ''
+   def args2 = task.ext.args2 ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
    """
    RUN_SPP=`which run_spp.R`
-   Rscript $args -e "library(caTools); source(\\"\$RUN_SPP\\")" -c="$bam" -savp="${prefix}.spp.pdf" -savd="${prefix}.spp.Rdata" -out="${prefix}.spp.out"
+   Rscript $args -e "library(caTools); source(\\"\$RUN_SPP\\")" -c="$bam" -savp="${prefix}.spp.pdf" -savd="${prefix}.spp.Rdata" -out="${prefix}.spp.out" $args2

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":

@@ -2,10 +2,10 @@ process PICARD_ADDORREPLACEREADGROUPS {
    tag "$meta.id"
    label 'process_low'
-   conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
+   conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
-       'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
+       'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+       'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

    input:
    tuple val(meta), path(bam)
@@ -38,12 +38,12 @@ process PICARD_ADDORREPLACEREADGROUPS {
        -Xmx${avail_mem}g \\
        --INPUT ${bam} \\
        --OUTPUT ${prefix}.bam \\
-       -ID ${ID} \\
-       -LB ${LIBRARY} \\
-       -PL ${PLATFORM} \\
-       -PU ${BARCODE} \\
-       -SM ${SAMPLE} \\
-       -CREATE_INDEX true
+       --RGID ${ID} \\
+       --RGLB ${LIBRARY} \\
+       --RGPL ${PLATFORM} \\
+       --RGPU ${BARCODE} \\
+       --RGSM ${SAMPLE} \\
+       --CREATE_INDEX true

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":

@@ -2,10 +2,10 @@ process PICARD_CLEANSAM {
    tag "$meta.id"
    label 'process_medium'
-   conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
+   conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
-       'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
+       'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+       'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

    input:
    tuple val(meta), path(bam)
@@ -31,8 +31,8 @@ process PICARD_CLEANSAM {
        -Xmx${avail_mem}g \\
        CleanSam \\
        ${args} \\
-       -I ${bam} \\
-       -O ${prefix}.bam
+       --INPUT ${bam} \\
+       --OUTPUT ${prefix}.bam

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":

@@ -2,10 +2,10 @@ process PICARD_COLLECTHSMETRICS {
    tag "$meta.id"
    label 'process_medium'
-   conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+   conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-       'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+       'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+       'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

    input:
    tuple val(meta), path(bam)
@@ -38,10 +38,10 @@ process PICARD_COLLECTHSMETRICS {
        CollectHsMetrics \\
        $args \\
        $reference \\
-       -BAIT_INTERVALS $bait_intervals \\
-       -TARGET_INTERVALS $target_intervals \\
-       -INPUT $bam \\
-       -OUTPUT ${prefix}.CollectHsMetrics.coverage_metrics
+       --BAIT_INTERVALS $bait_intervals \\
+       --TARGET_INTERVALS $target_intervals \\
+       --INPUT $bam \\
+       --OUTPUT ${prefix}.CollectHsMetrics.coverage_metrics

    cat <<-END_VERSIONS > versions.yml

@@ -2,10 +2,10 @@ process PICARD_COLLECTMULTIPLEMETRICS {
    tag "$meta.id"
    label 'process_medium'
-   conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+   conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-       'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+       'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+       'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

    input:
    tuple val(meta), path(bam)
@@ -33,9 +33,9 @@ process PICARD_COLLECTMULTIPLEMETRICS {
        -Xmx${avail_mem}g \\
        CollectMultipleMetrics \\
        $args \\
-       INPUT=$bam \\
-       OUTPUT=${prefix}.CollectMultipleMetrics \\
-       REFERENCE_SEQUENCE=$fasta
+       --INPUT $bam \\
+       --OUTPUT ${prefix}.CollectMultipleMetrics \\
+       --REFERENCE_SEQUENCE $fasta

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":

@@ -2,13 +2,13 @@ process PICARD_COLLECTWGSMETRICS {
    tag "$meta.id"
    label 'process_medium'
-   conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+   conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-       'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+       'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+       'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

    input:
-   tuple val(meta), path(bam), path(bai)
+   tuple val(meta), path(bam)
    path fasta

    output:
@@ -32,9 +32,10 @@ process PICARD_COLLECTWGSMETRICS {
        -Xmx${avail_mem}g \\
        CollectWgsMetrics \\
        $args \\
-       INPUT=$bam \\
-       OUTPUT=${prefix}.CollectWgsMetrics.coverage_metrics \\
-       REFERENCE_SEQUENCE=$fasta
+       --INPUT $bam \\
+       --OUTPUT ${prefix}.CollectWgsMetrics.coverage_metrics \\
+       --REFERENCE_SEQUENCE $fasta

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":

@@ -2,10 +2,10 @@ process PICARD_CREATESEQUENCEDICTIONARY {
    tag "$meta.id"
    label 'process_medium'
-   conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
+   conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
-       'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
+       'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+       'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

    input:
    tuple val(meta), path(fasta)
@@ -31,8 +31,8 @@ process PICARD_CREATESEQUENCEDICTIONARY {
        -Xmx${avail_mem}g \\
        CreateSequenceDictionary \\
        $args \\
-       R=$fasta \\
-       O=${prefix}.dict
+       --REFERENCE $fasta \\
+       --OUTPUT ${prefix}.dict

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":

@@ -2,10 +2,10 @@ process PICARD_CROSSCHECKFINGERPRINTS {
    tag "$meta.id"
    label 'process_medium'
-   conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+   conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-       'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+       'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+       'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

    input:
    tuple val(meta), path(input1)

@@ -2,10 +2,10 @@ process PICARD_FILTERSAMREADS {
    tag "$meta.id"
    label 'process_low'
-   conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+   conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-       'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+       'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+       'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

    input:
    tuple val(meta), path(bam), path(readlist)

@@ -2,10 +2,10 @@ process PICARD_FIXMATEINFORMATION {
    tag "$meta.id"
    label 'process_low'
-   conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
+   conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
-       'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
+       'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+       'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

    input:
    tuple val(meta), path(bam)
@@ -31,8 +31,8 @@ process PICARD_FIXMATEINFORMATION {
    picard \\
        FixMateInformation \\
        -Xmx${avail_mem}g \\
-       -I ${bam} \\
-       -O ${prefix}.bam \\
+       --INPUT ${bam} \\
+       --OUTPUT ${prefix}.bam \\
        --VALIDATION_STRINGENCY ${STRINGENCY}

    cat <<-END_VERSIONS > versions.yml

@@ -2,10 +2,10 @@ process PICARD_LIFTOVERVCF {
    tag "$meta.id"
    label 'process_low'
-   conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+   conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-       'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+       'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+       'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

    input:
    tuple val(meta), path(input_vcf)
@@ -35,11 +35,11 @@ process PICARD_LIFTOVERVCF {
        -Xmx${avail_mem}g \\
        LiftoverVcf \\
        $args \\
-       I=$input_vcf \\
-       O=${prefix}.lifted.vcf.gz \\
-       CHAIN=$chain \\
-       REJECT=${prefix}.unlifted.vcf.gz \\
-       R=$fasta
+       --INPUT $input_vcf \\
+       --OUTPUT ${prefix}.lifted.vcf.gz \\
+       --CHAIN $chain \\
+       --REJECT ${prefix}.unlifted.vcf.gz \\
+       --REFERENCE_SEQUENCE $fasta

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":

@@ -2,10 +2,10 @@ process PICARD_MARKDUPLICATES {
    tag "$meta.id"
    label 'process_medium'
-   conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+   conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-       'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+       'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+       'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

    input:
    tuple val(meta), path(bam)
@@ -33,9 +33,9 @@ process PICARD_MARKDUPLICATES {
        -Xmx${avail_mem}g \\
        MarkDuplicates \\
        $args \\
-       I=$bam \\
-       O=${prefix}.bam \\
-       M=${prefix}.MarkDuplicates.metrics.txt
+       --INPUT $bam \\
+       --OUTPUT ${prefix}.bam \\
+       --METRICS_FILE ${prefix}.MarkDuplicates.metrics.txt

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":

@@ -2,10 +2,10 @@ process PICARD_MERGESAMFILES {
    tag "$meta.id"
    label 'process_medium'
-   conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+   conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-       'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+       'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+       'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

    input:
    tuple val(meta), path(bams)
@@ -33,8 +33,8 @@ process PICARD_MERGESAMFILES {
        -Xmx${avail_mem}g \\
        MergeSamFiles \\
        $args \\
-       ${'INPUT='+bam_files.join(' INPUT=')} \\
-       OUTPUT=${prefix}.bam
+       ${'--INPUT '+bam_files.join(' --INPUT ')} \\
+       --OUTPUT ${prefix}.bam

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        picard: \$( echo \$(picard MergeSamFiles --version 2>&1) | grep -o 'Version:.*' | cut -f2- -d:)

@@ -2,10 +2,10 @@ process PICARD_SORTSAM {
    tag "$meta.id"
    label 'process_low'
-   conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+   conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-       'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+       'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+       'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

    input:
    tuple val(meta), path(bam)

@@ -2,10 +2,10 @@ process PICARD_SORTVCF {
    tag "$meta.id"
    label 'process_medium'
-   conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
+   conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
-       'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
+       'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
+       'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"

    input:
    tuple val(meta), path(vcf)
@@ -22,8 +22,8 @@ process PICARD_SORTVCF {
    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
-   def seq_dict = sequence_dict ? "-SEQUENCE_DICTIONARY $sequence_dict" : ""
-   def reference = reference ? "-REFERENCE_SEQUENCE $reference" : ""
+   def seq_dict = sequence_dict ? "--SEQUENCE_DICTIONARY $sequence_dict" : ""
+   def reference = reference ? "--REFERENCE_SEQUENCE $reference" : ""
    def avail_mem = 3
    if (!task.memory) {
        log.info '[Picard SortVcf] Available memory not known - defaulting to 3GB. Specify process memory requirements to change this.'

@@ -2,10 +2,10 @@ process RSEM_CALCULATEEXPRESSION {
    tag "$meta.id"
    label 'process_high'
-   conda (params.enable_conda ? "bioconda::rsem=1.3.3 bioconda::star=2.7.6a" : null)
+   conda (params.enable_conda ? "bioconda::rsem=1.3.3 bioconda::star=2.7.10a" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/mulled-v2-cf0123ef83b3c38c13e3b0696a3f285d3f20f15b:606b713ec440e799d53a2b51a6e79dbfd28ecf3e-0' :
-       'quay.io/biocontainers/mulled-v2-cf0123ef83b3c38c13e3b0696a3f285d3f20f15b:606b713ec440e799d53a2b51a6e79dbfd28ecf3e-0' }"
+       'https://depot.galaxyproject.org/singularity/mulled-v2-cf0123ef83b3c38c13e3b0696a3f285d3f20f15b:64aad4a4e144878400649e71f42105311be7ed87-0' :
+       'quay.io/biocontainers/mulled-v2-cf0123ef83b3c38c13e3b0696a3f285d3f20f15b:64aad4a4e144878400649e71f42105311be7ed87-0' }"

    input:
    tuple val(meta), path(reads)

@@ -2,10 +2,10 @@ process RSEM_PREPAREREFERENCE {
    tag "$fasta"
    label 'process_high'
-   conda (params.enable_conda ? "bioconda::rsem=1.3.3 bioconda::star=2.7.6a" : null)
+   conda (params.enable_conda ? "bioconda::rsem=1.3.3 bioconda::star=2.7.10a" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/mulled-v2-cf0123ef83b3c38c13e3b0696a3f285d3f20f15b:606b713ec440e799d53a2b51a6e79dbfd28ecf3e-0' :
-       'quay.io/biocontainers/mulled-v2-cf0123ef83b3c38c13e3b0696a3f285d3f20f15b:606b713ec440e799d53a2b51a6e79dbfd28ecf3e-0' }"
+       'https://depot.galaxyproject.org/singularity/mulled-v2-cf0123ef83b3c38c13e3b0696a3f285d3f20f15b:64aad4a4e144878400649e71f42105311be7ed87-0' :
+       'quay.io/biocontainers/mulled-v2-cf0123ef83b3c38c13e3b0696a3f285d3f20f15b:64aad4a4e144878400649e71f42105311be7ed87-0' }"

    input:
    path fasta, stageAs: "rsem/*"

View file

@@ -0,0 +1,35 @@
// There is a -L option to only output alignments in interval, might be an option for exons/panel data?
process SAMTOOLS_BAMTOCRAM {
    tag "$meta.id"
    label 'process_medium'

    conda (params.enable_conda ? "bioconda::samtools=1.15.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://depot.galaxyproject.org/singularity/samtools:1.15.1--h1170115_0' :
        'quay.io/biocontainers/samtools:1.15.1--h1170115_0' }"

    input:
    tuple val(meta), path(input), path(index)
    path fasta
    path fai

    output:
    tuple val(meta), path("*.cram"), path("*.crai"), emit: cram_crai
    path "versions.yml"                            , emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
    """
    samtools view --threads ${task.cpus} --reference ${fasta} -C $args $input > ${prefix}.cram
    samtools index -@${task.cpus} ${prefix}.cram

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
    END_VERSIONS
    """
}
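For orientation, this is how the new module gets exercised in the test further down in this changeset; a minimal sketch (include path and test-data keys as used there):

    include { SAMTOOLS_BAMTOCRAM } from './modules/samtools/bamtocram/main.nf' // path is illustrative

    workflow {
        // meta map plus a coordinate-sorted BAM and its index, matching the input tuple above
        input = [ [ id:'test', single_end:false ],
                  file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true),
                  file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam_bai'], checkIfExists: true) ]
        fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
        fai   = file(params.test_data['sarscov2']['genome']['genome_fasta_fai'], checkIfExists: true)

        SAMTOOLS_BAMTOCRAM ( input, fasta, fai )
    }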

View file

@@ -0,0 +1,52 @@
name: samtools_bamtocram
description: filter/convert and then index CRAM file
keywords:
  - view
  - index
  - bam
  - cram
tools:
  - samtools:
      description: |
        SAMtools is a set of utilities for interacting with and post-processing
        short DNA sequence read alignments in the SAM, BAM and CRAM formats, written by Heng Li.
        These files are generated as output by short read aligners like BWA.
      homepage: http://www.htslib.org/
      documentation: http://www.htslib.org/doc/samtools.html
      doi: 10.1093/bioinformatics/btp352
      licence: ["MIT"]
input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - input:
      type: file
      description: BAM/SAM file
      pattern: "*.{bam,sam}"
  - index:
      type: file
      description: BAM/SAM index file
      pattern: "*.{bai,sai}"
  - fasta:
      type: file
      description: Reference file to create the CRAM file
      pattern: "*.{fasta,fa}"
output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - cram_crai:
      type: file
      description: filtered/converted CRAM file + index
      pattern: "*{.cram,.crai}"
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"
authors:
  - "@FriederikeHanssen"
  - "@maxulysse"

View file

@@ -11,7 +11,8 @@ process TABIX_TABIX {
    tuple val(meta), path(tab)

    output:
-   tuple val(meta), path("*.tbi"), emit: tbi
+   tuple val(meta), path("*.tbi"), optional:true, emit: tbi
+   tuple val(meta), path("*.csi"), optional:true, emit: csi
    path "versions.yml"           , emit: versions

    when:
View file

@@ -31,6 +31,10 @@ output:
      type: file
      description: tabix index file
      pattern: "*.{tbi}"
+  - csi:
+      type: file
+      description: coordinate sorted index file
+      pattern: "*.{csi}"
  - versions:
      type: file
      description: File containing software versions

View file

@@ -24,7 +24,7 @@ process TIDDIT_SV {
    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
-   def reference = fasta == "dummy_file.txt" ? "--ref $fasta" : ""
+   def reference = fasta ? "--ref $fasta" : ""
    """
    tiddit \\
        --sv \\

View file

@@ -11,12 +11,13 @@ process TRIMGALORE {
    tuple val(meta), path(reads)

    output:
-   tuple val(meta), path("*.fq.gz")    , emit: reads
-   tuple val(meta), path("*report.txt"), emit: log
-   path "versions.yml"                 , emit: versions
-   tuple val(meta), path("*.html"), emit: html optional true
-   tuple val(meta), path("*.zip") , emit: zip  optional true
+   tuple val(meta), path("*{trimmed,val}*.fq.gz"), emit: reads
+   tuple val(meta), path("*report.txt")          , emit: log
+   path "versions.yml"                           , emit: versions
+   tuple val(meta), path("*unpaired*.fq.gz")     , emit: unpaired, optional: true
+   tuple val(meta), path("*.html")               , emit: html    , optional: true
+   tuple val(meta), path("*.zip")                , emit: zip     , optional: true

    when:
    task.ext.when == null || task.ext.when
@@ -52,6 +53,7 @@ process TRIMGALORE {
        $c_r1 \\
        $tpc_r1 \\
        ${prefix}.fastq.gz
+
    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        trimgalore: \$(echo \$(trim_galore --version 2>&1) | sed 's/^.*version //; s/Last.*\$//')
@@ -73,6 +75,7 @@ process TRIMGALORE {
        $tpc_r2 \\
        ${prefix}_1.fastq.gz \\
        ${prefix}_2.fastq.gz
+
    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        trimgalore: \$(echo \$(trim_galore --version 2>&1) | sed 's/^.*version //; s/Last.*\$//')
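Downstream consumers can now pick up reads whose mate was discarded during trimming via the new emit name; a minimal sketch (channel names are illustrative):

    TRIMGALORE ( ch_reads )

    // paired/validated reads keep flowing through .out.reads
    ch_trimmed = TRIMGALORE.out.reads

    // reads left without a mate after quality/adapter trimming are exposed separately
    TRIMGALORE.out.unpaired.view { meta, fq -> "unpaired for ${meta.id}: ${fq}" }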

View file

@@ -37,6 +37,11 @@ output:
        List of input adapter trimmed FastQ files of size 1 and 2 for
        single-end and paired-end data, respectively.
      pattern: "*.{fq.gz}"
+  - unpaired:
+      type: file
+      description: |
+        FastQ files containing unpaired reads from read 1 or read 2
+      pattern: "*unpaired*.fq.gz"
  - html:
      type: file
      description: FastQC report (optional)

View file

@ -0,0 +1,41 @@
//
// Run QC steps on BAM/CRAM files using Picard
//
include { PICARD_COLLECTMULTIPLEMETRICS } from '../../../modules/picard/collectmultiplemetrics/main'
include { PICARD_COLLECTWGSMETRICS } from '../../../modules/picard/collectwgsmetrics/main'
include { PICARD_COLLECTHSMETRICS } from '../../../modules/picard/collecthsmetrics/main'
workflow BAM_QC_PICARD {
take:
ch_bam // channel: [ val(meta), [ bam ]]
ch_fasta // channel: [ fasta ]
ch_fasta_fai // channel: [ fasta_fai ]
ch_bait_interval // channel: [ bait_interval ]
ch_target_interval // channel: [ target_interval ]
main:
ch_versions = Channel.empty()
ch_coverage_metrics = Channel.empty()
PICARD_COLLECTMULTIPLEMETRICS( ch_bam, ch_fasta )
ch_versions = ch_versions.mix(PICARD_COLLECTMULTIPLEMETRICS.out.versions.first())
if (ch_bait_interval || ch_target_interval) {
if (!ch_bait_interval) log.error("Bait interval channel is empty")
if (!ch_target_interval) log.error("Target interval channel is empty")
PICARD_COLLECTHSMETRICS( ch_bam, ch_fasta, ch_fasta_fai, ch_bait_interval, ch_target_interval )
ch_coverage_metrics = ch_coverage_metrics.mix(PICARD_COLLECTHSMETRICS.out.metrics)
ch_versions = ch_versions.mix(PICARD_COLLECTHSMETRICS.out.versions.first())
} else {
PICARD_COLLECTWGSMETRICS( ch_bam, ch_fasta )
ch_versions = ch_versions.mix(PICARD_COLLECTWGSMETRICS.out.versions.first())
ch_coverage_metrics = ch_coverage_metrics.mix(PICARD_COLLECTWGSMETRICS.out.metrics)
}
emit:
coverage_metrics = ch_coverage_metrics // channel: [ val(meta), [ coverage_metrics ] ]
multiple_metrics = PICARD_COLLECTMULTIPLEMETRICS.out.metrics // channel: [ val(meta), [ multiple_metrics ] ]
versions = ch_versions // channel: [ versions.yml ]
}
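As the subworkflow tests at the end of this changeset show, WGS and targeted modes are toggled purely by whether the interval inputs are populated; a minimal sketch (channel names are illustrative):

    // WGS mode: empty bait/target placeholders select PICARD_COLLECTWGSMETRICS
    BAM_QC_PICARD ( ch_bam, ch_fasta, ch_fasta_fai, [], [] )

    // Targeted mode: providing both interval lists selects PICARD_COLLECTHSMETRICS
    BAM_QC_PICARD ( ch_bam, ch_fasta, ch_fasta_fai, ch_bait_interval, ch_target_interval )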

View file

@@ -0,0 +1,60 @@
name: bam_qc
description: Produces comprehensive statistics from BAM file
keywords:
  - statistics
  - counts
  - hs_metrics
  - wgs_metrics
  - bam
  - sam
  - cram
modules:
  - picard/collectmultiplemetrics
  - picard/collectwgsmetrics
  - picard/collecthsmetrics
input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
      description: BAM/CRAM/SAM file
      pattern: "*.{bam,cram,sam}"
  - fasta:
      type: optional file
      description: Reference fasta file
      pattern: "*.{fasta,fa}"
  - fasta_fai:
      type: optional file
      description: Reference fasta file index
      pattern: "*.{fasta,fa}.fai"
  - bait_intervals:
      type: optional file
      description: An interval list file that contains the locations of the baits used.
      pattern: "baits.interval_list"
  - target_intervals:
      type: optional file
      description: An interval list file that contains the locations of the targets.
      pattern: "targets.interval_list"
output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - coverage_metrics:
      type: file
      description: Alignment metrics files generated by picard CollectHsMetrics or CollectWgsMetrics
      pattern: "*_metrics.txt"
  - multiple_metrics:
      type: file
      description: Alignment metrics files generated by picard CollectMultipleMetrics
      pattern: "*_{metrics}"
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"
authors:
  - "@matthdsm"

View file

@@ -607,6 +607,10 @@ elprep/filter:
  - modules/elprep/filter/**
  - tests/modules/elprep/filter/**

+elprep/merge:
+  - modules/elprep/merge/**
+  - tests/modules/elprep/merge/**
+
elprep/split:
  - modules/elprep/split/**
  - tests/modules/elprep/split/**
@@ -1054,6 +1058,10 @@ krona/ktimporttaxonomy:
  - modules/krona/ktimporttaxonomy/**
  - tests/modules/krona/ktimporttaxonomy/**

+krona/ktimporttext:
+  - modules/krona/ktimporttext/**
+  - tests/modules/krona/ktimporttext/**
+
last/dotplot:
  - modules/last/dotplot/**
  - tests/modules/last/dotplot/**
@@ -1599,6 +1607,10 @@ samtools/bam2fq:
  - modules/samtools/bam2fq/**
  - tests/modules/samtools/bam2fq/**

+samtools/bamtocram:
+  - modules/samtools/bamtocram/**
+  - tests/modules/samtools/bamtocram/**
+
samtools/collatefastq:
  - modules/samtools/collatefastq/**
  - tests/modules/samtools/collatefastq/**

View file

@@ -14,6 +14,7 @@ params {
            genome_paf          = "${test_data_dir}/genomics/sarscov2/genome/genome.paf"
            genome_sizes        = "${test_data_dir}/genomics/sarscov2/genome/genome.sizes"
            transcriptome_fasta = "${test_data_dir}/genomics/sarscov2/genome/transcriptome.fasta"
+           proteome_fasta      = "${test_data_dir}/genomics/sarscov2/genome/proteome.fasta"
            transcriptome_paf   = "${test_data_dir}/genomics/sarscov2/genome/transcriptome.paf"

            test_bed            = "${test_data_dir}/genomics/sarscov2/genome/bed/test.bed"
@@ -109,6 +110,9 @@ params {
            test_sequencing_summary = "${test_data_dir}/genomics/sarscov2/nanopore/sequencing_summary/test.sequencing_summary.txt"
        }
+       'metagenome' {
+           kraken_report = "${test_data_dir}/genomics/sarscov2/metagenome/test_1.kraken2.report.txt"
+       }
    }
    'homo_sapiens' {
        'genome' {
@@ -245,8 +249,8 @@ params {
            test2_2_fastq_gz     = "${test_data_dir}/genomics/homo_sapiens/illumina/fastq/test2_2.fastq.gz"
            test2_umi_1_fastq_gz = "${test_data_dir}/genomics/homo_sapiens/illumina/fastq/test2.umi_1.fastq.gz"
            test2_umi_2_fastq_gz = "${test_data_dir}/genomics/homo_sapiens/illumina/fastq/test2.umi_2.fastq.gz"
-           test_rnaseq_1_fastq_gz = "${test_data_dir}/genomics/homo_sapiens/illumina/fastq/test.rnaseq_1.fastq.gz"
-           test_rnaseq_2_fastq_gz = "${test_data_dir}/genomics/homo_sapiens/illumina/fastq/test.rnaseq_2.fastq.gz"
+           test_rnaseq_1_fastq_gz = "${test_data_dir}/genomics/homo_sapiens/illumina/fastq/test_rnaseq_1.fastq.gz"
+           test_rnaseq_2_fastq_gz = "${test_data_dir}/genomics/homo_sapiens/illumina/fastq/test_rnaseq_2.fastq.gz"

            test_baserecalibrator_table  = "${test_data_dir}/genomics/homo_sapiens/illumina/gatk/test.baserecalibrator.table"
            test2_baserecalibrator_table = "${test_data_dir}/genomics/homo_sapiens/illumina/gatk/test2.baserecalibrator.table"

View file

@@ -1,14 +1,17 @@
- name: antismash antismashlitedownloaddatabases test_antismash_antismashlitedownloaddatabases
  command: nextflow run tests/modules/antismash/antismashlitedownloaddatabases -entry test_antismash_antismashlitedownloaddatabases -c tests/config/nextflow.config
  tags:
-    - antismash/antismashlitedownloaddatabases
    - antismash
+    - antismash/antismashlitedownloaddatabases
  files:
    - path: output/antismash/versions.yml
-      md5sum: e2656c8d2bcc7469eba40eb1ee5c91b3
+      md5sum: 24859c67023abab99de295d3675a24b6
    - path: output/antismash/antismash_db
    - path: output/antismash/antismash_db/clusterblast
    - path: output/antismash/antismash_db/clustercompare
    - path: output/antismash/antismash_db/pfam
    - path: output/antismash/antismash_db/resfam
    - path: output/antismash/antismash_db/tigrfam
+    - path: output/antismash/css
+    - path: output/antismash/detection
+    - path: output/antismash/modules

View file

@@ -2,13 +2,29 @@
nextflow.enable.dsl = 2

-include { BAMTOOLS_SPLIT } from '../../../../modules/bamtools/split/main.nf'
+include { BAMTOOLS_SPLIT as BAMTOOLS_SPLIT_SINGLE   } from '../../../../modules/bamtools/split/main.nf'
+include { BAMTOOLS_SPLIT as BAMTOOLS_SPLIT_MULTIPLE } from '../../../../modules/bamtools/split/main.nf'

-workflow test_bamtools_split {
+workflow test_bamtools_split_single_input {
    input = [
        [ id:'test', single_end:false ], // meta map
-       file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true) ]
+       file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
+   ]

-   BAMTOOLS_SPLIT ( input )
+   BAMTOOLS_SPLIT_SINGLE ( input )
}
+
+workflow test_bamtools_split_multiple {
+   input = [
+       [ id:'test', single_end:false ], // meta map
+       [
+           file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true),
+           file(params.test_data['homo_sapiens']['illumina']['test2_paired_end_sorted_bam'], checkIfExists: true)
+       ]
+   ]
+
+   BAMTOOLS_SPLIT_MULTIPLE ( input )
+}

View file

@@ -1,10 +1,23 @@
-- name: bamtools split test_bamtools_split
-  command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
+- name: bamtools split test_bamtools_split_single_input
+  command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split_single_input -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
  tags:
-    - bamtools/split
    - bamtools
+    - bamtools/split
  files:
-    - path: output/bamtools/test.paired_end.sorted.REF_chr22.bam
+    - path: output/bamtools/test.REF_chr22.bam
      md5sum: b7dc50e0edf9c6bfc2e3b0e6d074dc07
-    - path: output/bamtools/test.paired_end.sorted.REF_unmapped.bam
+    - path: output/bamtools/test.REF_unmapped.bam
      md5sum: e0754bf72c51543b2d745d96537035fb
+    - path: output/bamtools/versions.yml
+
+- name: bamtools split test_bamtools_split_multiple
+  command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split_multiple -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
+  tags:
+    - bamtools
+    - bamtools/split
+  files:
+    - path: output/bamtools/test.REF_chr22.bam
+      md5sum: 585675bea34c48ebe9db06a561d4b4fa
+    - path: output/bamtools/test.REF_unmapped.bam
+      md5sum: 16ad644c87b9471f3026bc87c98b4963
+    - path: output/bamtools/versions.yml

View file

@@ -7,9 +7,22 @@ include { DIAMOND_BLASTP } from '../../../../modules/diamond/blastp/main.nf'

workflow test_diamond_blastp {

-   db    = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
-   fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
+   db    = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
+   fasta = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
+   out_ext = 'txt'
+   blast_columns = 'qseqid qlen'

    DIAMOND_MAKEDB ( db )
-   DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db )
+   DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
+}
+
+workflow test_diamond_blastp_daa {
+
+   db    = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
+   fasta = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
+   out_ext = 'daa'
+   blast_columns = []
+
+   DIAMOND_MAKEDB ( db )
+   DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
}
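Judging from these tests, the two new value inputs select the output format and, for tabular output, the column set: 'txt' yields tabular BLAST output shaped by blast_columns, 'daa' yields DIAMOND's binary archive (where blast_columns is passed as []), and an unrecognised extension falls back to a default, as the blastx test below checks with a nonsense value. A sketch of a call (names as in the tests):

    out_ext       = 'txt'          // or 'daa' for a DIAMOND alignment archive
    blast_columns = 'qseqid qlen'  // [] keeps the tool's default columns

    DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )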

View file

@@ -1,8 +1,17 @@
-- name: diamond blastp
-  command: nextflow run ./tests/modules/diamond/blastp -entry test_diamond_blastp -c ./tests/config/nextflow.config -c ./tests/modules/diamond/blastp/nextflow.config
+- name: diamond blastp test_diamond_blastp
+  command: nextflow run tests/modules/diamond/blastp -entry test_diamond_blastp -c tests/config/nextflow.config
  tags:
-    - diamond
    - diamond/blastp
+    - diamond
  files:
-    - path: ./output/diamond/test.diamond_blastp.txt
-      md5sum: 3ca7f6290c1d8741c573370e6f8b4db0
+    - path: output/diamond/test.diamond_blastp.txt
+    - path: output/diamond/versions.yml
+
+- name: diamond blastp test_diamond_blastp_daa
+  command: nextflow run tests/modules/diamond/blastp -entry test_diamond_blastp_daa -c tests/config/nextflow.config
+  tags:
+    - diamond/blastp
+    - diamond
+  files:
+    - path: output/diamond/test.diamond_blastp.daa
+    - path: output/diamond/versions.yml

View file

@@ -7,9 +7,22 @@ include { DIAMOND_BLASTX } from '../../../../modules/diamond/blastx/main.nf'

workflow test_diamond_blastx {

-   db    = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
+   db    = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
    fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
+   out_ext = 'tfdfdt' // Nonsense file extension to check default case.
+   blast_columns = 'qseqid qlen'

    DIAMOND_MAKEDB ( db )
-   DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db )
+   DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
+}
+
+workflow test_diamond_blastx_daa {
+
+   db    = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
+   fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
+   out_ext = 'daa'
+   blast_columns = []
+
+   DIAMOND_MAKEDB ( db )
+   DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
}

View file

@@ -1,8 +1,18 @@
-- name: diamond blastx
-  command: nextflow run ./tests/modules/diamond/blastx -entry test_diamond_blastx -c ./tests/config/nextflow.config -c ./tests/modules/diamond/blastx/nextflow.config
+- name: diamond blastx test_diamond_blastx
+  command: nextflow run tests/modules/diamond/blastx -entry test_diamond_blastx -c tests/config/nextflow.config
  tags:
    - diamond
    - diamond/blastx
  files:
-    - path: ./output/diamond/test.diamond_blastx.txt
-      md5sum: d41d8cd98f00b204e9800998ecf8427e
+    - path: output/diamond/test.diamond_blastx.txt
+    - path: output/diamond/versions.yml
+
+- name: diamond blastx test_diamond_blastx_daa
+  command: nextflow run tests/modules/diamond/blastx -entry test_diamond_blastx_daa -c tests/config/nextflow.config
+  tags:
+    - diamond
+    - diamond/blastx
+  files:
+    - path: output/diamond/test.diamond_blastx.daa
+      md5sum: 0df4a833408416f32981415873facc11
+    - path: output/diamond/versions.yml

View file

@@ -6,7 +6,7 @@ include { DIAMOND_MAKEDB } from '../../../../modules/diamond/makedb/main.nf'

workflow test_diamond_makedb {

-   input = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
+   input = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]

    DIAMOND_MAKEDB ( input )
}

View file

@@ -1,8 +1,9 @@
- name: diamond makedb test_diamond_makedb
-  command: nextflow run ./tests/modules/diamond/makedb -entry test_diamond_makedb -c ./tests/config/nextflow.config -c ./tests/modules/diamond/makedb/nextflow.config
+  command: nextflow run tests/modules/diamond/makedb -entry test_diamond_makedb -c tests/config/nextflow.config
  tags:
-    - diamond
    - diamond/makedb
+    - diamond
  files:
-    - path: output/diamond/genome.fasta.dmnd
-      md5sum: 2447fb376394c20d43ea3aad2aa5d15d
+    - path: output/diamond/proteome.fasta.dmnd
+      md5sum: fc28c50b202dd7a7c5451cddff2ba1f4
+    - path: output/diamond/versions.yml

View file

@ -0,0 +1,17 @@
#!/usr/bin/env nextflow
nextflow.enable.dsl = 2
include { ELPREP_SPLIT } from '../../../../modules/elprep/split/main.nf'
include { ELPREP_MERGE } from '../../../../modules/elprep/merge/main.nf'
workflow test_elprep_merge {
input = [
[ id:'test', single_end:false ], // meta map
file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
]
ELPREP_SPLIT ( input )
ELPREP_MERGE ( ELPREP_SPLIT.out.bam )
}

View file

@@ -0,0 +1,5 @@
process {
    withName : ELPREP_MERGE {
        publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }
    }
}
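For what that publishDir closure evaluates to: the fully qualified process name is split on ':' to take the last segment, then on '_' to take the tool prefix. A worked example in plain Groovy (the qualified name is illustrative):

    def name = 'TEST_ELPREP_MERGE:ELPREP_MERGE'
    assert name.tokenize(':')[-1].tokenize('_')[0].toLowerCase() == 'elprep'
    // so outputs land under "${params.outdir}/elprep", matching the paths in the test below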

View file

@@ -0,0 +1,8 @@
- name: elprep merge test_elprep_merge
  command: nextflow run tests/modules/elprep/merge -entry test_elprep_merge -c tests/config/nextflow.config
  tags:
    - elprep
    - elprep/merge
  files:
    - path: output/elprep/output/test.bam
    - path: output/elprep/versions.yml

View file

@@ -6,7 +6,23 @@ include { GATK4_SPLITNCIGARREADS } from '../../../../modules/gatk4/splitncigarreads/main.nf'

workflow test_gatk4_splitncigarreads {
    input = [ [ id:'test' ], // meta map
-             [ file(params.test_data['sarscov2']['illumina']['test_paired_end_bam'], checkIfExists: true) ]
+             file(params.test_data['sarscov2']['illumina']['test_paired_end_bam'], checkIfExists: true),
+             [],
+             []
+           ]
+   fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
+   fai = file(params.test_data['sarscov2']['genome']['genome_fasta_fai'], checkIfExists: true)
+   dict = file(params.test_data['sarscov2']['genome']['genome_dict'], checkIfExists: true)
+
+   GATK4_SPLITNCIGARREADS ( input, fasta, fai, dict )
+}
+
+workflow test_gatk4_splitncigarreads_intervals {
+   input = [ [ id:'test' ], // meta map
+             file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true),
+             file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam_bai'], checkIfExists: true),
+             file(params.test_data['sarscov2']['genome']['test_bed'], checkIfExists: true)
           ]
   fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
   fai = file(params.test_data['sarscov2']['genome']['genome_fasta_fai'], checkIfExists: true)
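In short, the module's input tuple has gained two slots: the BAM is now accompanied by an optional index and an optional intervals file, with empty lists keeping the tuple arity fixed when they are absent. The two shapes side by side (bam, bai and bed stand for the files above):

    input = [ [ id:'test' ], bam, [], []  ]   // no index, no intervals
    input = [ [ id:'test' ], bam, bai, bed ]  // indexed BAM restricted to intervals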

View file

@@ -5,5 +5,14 @@
    - gatk4/splitncigarreads
  files:
    - path: output/gatk4/test.bam
-      md5sum: ceed15c0bd64ff5c38d3816905933b0b
+      md5sum: 436d8e31285c6b588bdd1c7f1d07f6f2
    - path: output/gatk4/versions.yml
+- name: gatk4 splitncigarreads test_gatk4_splitncigarreads_intervals
+  command: nextflow run tests/modules/gatk4/splitncigarreads -entry test_gatk4_splitncigarreads_intervals -c tests/config/nextflow.config
+  tags:
+    - gatk4
+    - gatk4/splitncigarreads
+  files:
+    - path: output/gatk4/test.bam
+      md5sum: cd56e3225950f519fd47164cca60a0bb
+    - path: output/gatk4/versions.yml

View file

@ -0,0 +1,31 @@
#!/usr/bin/env nextflow
nextflow.enable.dsl = 2
include { KRONA_KTIMPORTTEXT } from '../../../../modules/krona/ktimporttext/main.nf'
workflow test_krona_ktimporttext_multi {
input = [
[ id:'test', single_end:false ], // meta map
[
file('https://raw.githubusercontent.com/nf-core/test-datasets/modules/data/delete_me/krona/ktimporttext.txt', checkIfExists: true), // krona default test file
file(params.test_data['sarscov2']['metagenome']['kraken_report'], checkIfExists: true), //Kraken2 report file
file('https://raw.githubusercontent.com/nf-core/test-datasets/modules/data/delete_me/krona/kaiju_out4krona.txt', checkIfExists: true) // Kaiju output 4 krona
]
]
KRONA_KTIMPORTTEXT ( input )
}
workflow test_krona_ktimporttext_single {
input = [
[ id:'test', single_end:false ], // meta map
[
file('http://krona.sourceforge.net/examples/text.txt', checkIfExists: true) // krona default test file
]
]
KRONA_KTIMPORTTEXT ( input )
}

View file

@@ -0,0 +1,5 @@
process {

    publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }

}

View file

@@ -0,0 +1,19 @@
- name: krona ktimporttext test_krona_ktimporttext_multi
  command: nextflow run tests/modules/krona/ktimporttext -entry test_krona_ktimporttext_multi -c tests/config/nextflow.config
  tags:
    - krona
    - krona/ktimporttext
  files:
    - path: output/krona/test.html
      contains:
        - "DOCTYPE html PUBLIC"

- name: krona ktimporttext test_krona_ktimporttext_single
  command: nextflow run tests/modules/krona/ktimporttext -entry test_krona_ktimporttext_single -c tests/config/nextflow.config
  tags:
    - krona
    - krona/ktimporttext
  files:
    - path: output/krona/test.html
      contains:
        - "DOCTYPE html PUBLIC"

View file

@@ -9,8 +9,11 @@ workflow test_minimap2_align_single_end {
              [ file(params.test_data['sarscov2']['illumina']['test_1_fastq_gz'], checkIfExists: true)]
            ]
    fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
+   bam_format = true
+   cigar_paf_format = false
+   cigar_bam = false

-   MINIMAP2_ALIGN ( input, fasta )
+   MINIMAP2_ALIGN ( input, fasta, bam_format, cigar_paf_format, cigar_bam )
}

workflow test_minimap2_align_paired_end {
@@ -19,6 +22,9 @@ workflow test_minimap2_align_paired_end {
              file(params.test_data['sarscov2']['illumina']['test_2_fastq_gz'], checkIfExists: true) ]
            ]
    fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
+   bam_format = true
+   cigar_paf_format = false
+   cigar_bam = false

-   MINIMAP2_ALIGN ( input, fasta )
+   MINIMAP2_ALIGN ( input, fasta, bam_format, cigar_paf_format, cigar_bam )

View file

@@ -1,17 +1,17 @@
-- name: minimap2 align single-end
-  command: nextflow run ./tests/modules/minimap2/align -entry test_minimap2_align_single_end -c ./tests/config/nextflow.config -c ./tests/modules/minimap2/align/nextflow.config
+- name: minimap2 align test_minimap2_align_single_end
+  command: nextflow run tests/modules/minimap2/align -entry test_minimap2_align_single_end -c tests/config/nextflow.config
  tags:
    - minimap2
    - minimap2/align
  files:
-    - path: ./output/minimap2/test.paf
-      md5sum: 70e8cf299ee3ecd33e629d10c1f588ce
+    - path: output/minimap2/test.bam
+    - path: output/minimap2/versions.yml

-- name: minimap2 align paired-end
-  command: nextflow run ./tests/modules/minimap2/align -entry test_minimap2_align_paired_end -c ./tests/config/nextflow.config -c ./tests/modules/minimap2/align/nextflow.config
+- name: minimap2 align test_minimap2_align_paired_end
+  command: nextflow run tests/modules/minimap2/align -entry test_minimap2_align_paired_end -c tests/config/nextflow.config
  tags:
    - minimap2
    - minimap2/align
  files:
-    - path: ./output/minimap2/test.paf
-      md5sum: 5e7b55a26bf0ea3a2843423d3e0b9a28
+    - path: output/minimap2/test.bam
+    - path: output/minimap2/versions.yml

View file

@@ -7,4 +7,3 @@
    - path: output/picard/test.bam
      md5sum: 7b82f3461c2d80fc6a10385e78c9427f
    - path: output/picard/versions.yml
-      md5sum: 8a2d176295e1343146ea433c79bb517f

View file

@@ -7,4 +7,3 @@
    - path: output/picard/test.bam
      md5sum: a48f8e77a1480445efc57570c3a38a68
    - path: output/picard/versions.yml
-      md5sum: e6457d7c6de51bf6f4b577eda65e57ac

View file

@@ -6,8 +6,7 @@ include { PICARD_COLLECTWGSMETRICS } from '../../../../modules/picard/collectwgsmetrics/main.nf'

workflow test_picard_collectwgsmetrics {
    input = [ [ id:'test', single_end:false ], // meta map
              file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true),
-             file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam_bai'], checkIfExists: true)
            ]
    fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)

View file

@@ -7,4 +7,3 @@
    - path: output/picard/test.dict
      contains: ["SN:MT192765.1"]
    - path: output/picard/versions.yml
-      md5sum: b3d8c7ea65b8a6d3237b153d13fe2014

View file

@@ -7,4 +7,3 @@
    - path: output/picard/test.bam
      md5sum: 746102e8c242c0ef42e045c49d320030
    - path: output/picard/versions.yml
-      md5sum: 4329ba7cdca8f4f6018dfd5c019ba2eb

View file

@@ -1,5 +1,5 @@
process {

-   ext.args = "WARN_ON_MISSING_CONTIG=true"
+   ext.args = "--WARN_ON_MISSING_CONTIG true"
    publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }
}

View file

@@ -3,7 +3,7 @@ process {
    publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }

    withName: PICARD_MARKDUPLICATES_UNSORTED {
-       ext.args = 'ASSUME_SORT_ORDER=queryname'
+       ext.args = '--ASSUME_SORT_ORDER queryname'
    }
}
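Both config tweaks follow from the Picard 2.27.x bump earlier in this changeset: as these diffs suggest, newer Picard releases expect GATK-style arguments rather than the legacy OPTION=value form, so any ext.args overrides have to be written accordingly, e.g.:

    process {
        withName: PICARD_SORTVCF {
            ext.args = '--WARN_ON_MISSING_CONTIG true' // was "WARN_ON_MISSING_CONTIG=true"
        }
    }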

View file

@@ -42,7 +42,7 @@
    - path: output/rsem/rsem/genome.transcripts.fa
      md5sum: 050c521a2719c2ae48267c1e65218f29
    - path: output/rsem/rsem/genomeParameters.txt
-      md5sum: 2fe3a030e1706c3e8cd4df3818e6dd2f
+      md5sum: df5a456e3242520cc36e0083a6a7d9dd
    - path: output/rsem/rsem/sjdbInfo.txt
      md5sum: 5690ea9d9f09f7ff85b7fd47bd234903
    - path: output/rsem/rsem/sjdbList.fromGTF.out.tab
@@ -63,4 +63,4 @@
    - path: output/rsem/test.stat/test.theta
      md5sum: de2e4490c98cc5383a86ae8225fd0a28
    - path: output/rsem/test.transcript.bam
-      md5sum: 7846491086c478858419667d60f18edd
+      md5sum: ed681d39f5700ffc74d6321525330d93

View file

@ -0,0 +1,17 @@
#!/usr/bin/env nextflow
nextflow.enable.dsl = 2
include { SAMTOOLS_BAMTOCRAM } from '../../../../modules/samtools/bamtocram/main.nf'
workflow test_samtools_bamtocram {
input = [ [ id:'test', single_end:false ], // meta map
file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true),
file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam_bai'], checkIfExists: true)]
fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
fai = file(params.test_data['sarscov2']['genome']['genome_fasta_fai'], checkIfExists: true)
SAMTOOLS_BAMTOCRAM ( input, fasta, fai )
}

View file

@@ -0,0 +1,5 @@
process {

    publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }

}

View file

@@ -0,0 +1,9 @@
- name: samtools bamtocram test_samtools_bamtocram
  command: nextflow run ./tests/modules/samtools/bamtocram -entry test_samtools_bamtocram -c ./tests/config/nextflow.config -c ./tests/modules/samtools/bamtocram/nextflow.config
  tags:
    - samtools/bamtocram
    - samtools
  files:
    - path: output/samtools/test.cram
    - path: output/samtools/test.cram.crai
    - path: output/samtools/versions.yml

View file

@@ -2,9 +2,10 @@
nextflow.enable.dsl = 2

include { TABIX_TABIX as TABIX_BED } from '../../../../modules/tabix/tabix/main.nf'
include { TABIX_TABIX as TABIX_GFF } from '../../../../modules/tabix/tabix/main.nf'
-include { TABIX_TABIX as TABIX_VCF } from '../../../../modules/tabix/tabix/main.nf'
+include { TABIX_TABIX as TABIX_VCF_TBI } from '../../../../modules/tabix/tabix/main.nf'
+include { TABIX_TABIX as TABIX_VCF_CSI } from '../../../../modules/tabix/tabix/main.nf'

workflow test_tabix_tabix_bed {
    input = [ [ id:'B.bed' ], // meta map
@@ -22,10 +23,18 @@ workflow test_tabix_tabix_gff {
    TABIX_GFF ( input )
}

-workflow test_tabix_tabix_vcf {
+workflow test_tabix_tabix_vcf_tbi {
    input = [ [ id:'test.vcf' ], // meta map
              [ file(params.test_data['sarscov2']['illumina']['test_vcf_gz'], checkIfExists: true) ]
            ]

-   TABIX_VCF ( input )
+   TABIX_VCF_TBI ( input )
+}
+
+workflow test_tabix_tabix_vcf_csi {
+   input = [ [ id:'test.vcf' ], // meta map
+             [ file(params.test_data['sarscov2']['illumina']['test_vcf_gz'], checkIfExists: true) ]
+           ]
+
+   TABIX_VCF_CSI ( input )
}

View file

@@ -10,8 +10,12 @@ process {
        ext.args = '-p gff'
    }

-   withName: TABIX_VCF {
+   withName: TABIX_VCF_TBI {
        ext.args = '-p vcf'
    }
+
+   withName: TABIX_VCF_CSI {
+       ext.args = '-p vcf --csi'
+   }
}

View file

@@ -15,10 +15,18 @@
    - path: ./output/tabix/genome.gff3.gz.tbi
      md5sum: f79a67d95a98076e04fbe0455d825926

- name: tabix tabix vcf
-  command: nextflow run ./tests/modules/tabix/tabix -entry test_tabix_tabix_vcf -c ./tests/config/nextflow.config -c ./tests/modules/tabix/tabix/nextflow.config
+  command: nextflow run ./tests/modules/tabix/tabix -entry test_tabix_tabix_vcf_tbi -c ./tests/config/nextflow.config -c ./tests/modules/tabix/tabix/nextflow.config
  tags:
    - tabix
    - tabix/tabix
  files:
    - path: output/tabix/test.vcf.gz.tbi
      md5sum: 36e11bf96ed0af4a92caa91a68612d64
+
+- name: tabix tabix vcf csi
+  command: nextflow run ./tests/modules/tabix/tabix -entry test_tabix_tabix_vcf_csi -c ./tests/config/nextflow.config -c ./tests/modules/tabix/tabix/nextflow.config
+  tags:
+    - tabix
+    - tabix/tabix
+  files:
+    - path: output/tabix/test.vcf.gz.csi
+      md5sum: 5f930522d2b9dcdba2807b7da4dfa3fd

View file

@@ -9,6 +9,7 @@
    - path: output/tiddit/test.signals.tab
      md5sum: dab4b2fec4ddf8eb1c23005b0770150e
    - path: output/tiddit/test.vcf
+      md5sum: bdce14ae8292bf3deb81f6f255baf859

- name: tiddit sv no ref
  command: nextflow run ./tests/modules/tiddit/sv -entry test_tiddit_sv_no_ref -c ./tests/config/nextflow.config -c ./tests/modules/tiddit/sv/nextflow.config
@@ -21,3 +22,4 @@
    - path: output/tiddit/test.signals.tab
      md5sum: dab4b2fec4ddf8eb1c23005b0770150e
    - path: output/tiddit/test.vcf
+      md5sum: 3d0e83a8199b2bdb81cfe3e6b12bf64b

View file

@ -0,0 +1,27 @@
#!/usr/bin/env nextflow
nextflow.enable.dsl = 2
include { BAM_QC_PICARD } from '../../../../subworkflows/nf-core/bam_qc_picard/main' addParams([:])
workflow test_bam_qc_picard_wgs {
input = [ [ id:'test', single_end:false ], // meta map
file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
]
fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
fasta_fai = file(params.test_data['sarscov2']['genome']['genome_fasta_fai'], checkIfExists: true)
BAM_QC_PICARD ( input, fasta, fasta_fai, [], [] )
}
workflow test_bam_qc_picard_targetted {
input = [ [ id:'test', single_end:false ], // meta map
file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
]
fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
fasta_fai = file(params.test_data['sarscov2']['genome']['genome_fasta_fai'], checkIfExists: true)
bait = file(params.test_data['sarscov2']['genome']['baits_interval_list'], checkIfExists: true)
target = file(params.test_data['sarscov2']['genome']['targets_interval_list'], checkIfExists: true)
BAM_QC_PICARD ( input, fasta, fasta_fai, bait, target )
}

View file

@@ -0,0 +1,5 @@
process {

    publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }

}

View file

@@ -0,0 +1,33 @@
- name: bam qc picard wgs
  command: nextflow run ./tests/subworkflows/nf-core/bam_qc_picard -entry test_bam_qc_picard_wgs -c tests/config/nextflow.config
  tags:
    - subworkflows
  #    - subworkflows/bam_qc_picard
  # Modules
  #    - picard
  #    - picard/collectmultiplemetrics
  #    - picard/collectwgsmetrics
  files:
    - path: ./output/picard/test.CollectMultipleMetrics.alignment_summary_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.insert_size_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.base_distribution_by_cycle_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.quality_by_cycle_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.quality_distribution_metrics
    - path: ./output/picard/test.CollectWgsMetrics.coverage_metrics

- name: bam qc picard targetted
  command: nextflow run ./tests/subworkflows/nf-core/bam_qc_picard -entry test_bam_qc_picard_targetted -c tests/config/nextflow.config
  tags:
    - subworkflows
  #    - subworkflows/bam_qc_picard
  # Modules
  #    - picard
  #    - picard/collectmultiplemetrics
  #    - picard/collecthsmetrics
  files:
    - path: ./output/picard/test.CollectMultipleMetrics.alignment_summary_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.insert_size_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.base_distribution_by_cycle_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.quality_by_cycle_metrics
    - path: ./output/picard/test.CollectMultipleMetrics.quality_distribution_metrics
    - path: ./output/picard/test.CollectHsMetrics.coverage_metrics