Merge branch 'master' into antismash_db_output

Jasmin F 2022-05-02 17:11:51 +02:00 committed by GitHub
commit 235944f797
56 changed files with 748 additions and 272 deletions


@@ -1,64 +0,0 @@
---
name: Bug report
about: Report something that is broken or incorrect
title: "[BUG]"
---
<!--
# nf-core/module bug report
Hi there!
Thanks for telling us about a problem with the modules.
Please delete this text and anything that's not relevant from the template below:
-->
## Check Documentation
I have checked the following places for your error:
- [ ] [nf-core website: troubleshooting](https://nf-co.re/usage/troubleshooting)
- [ ] [nf-core/module documentation](https://github.com/nf-core/modules/blob/master/README.md)
## Description of the bug
<!-- A clear and concise description of what the bug is. -->
## Steps to reproduce
Steps to reproduce the behaviour:
1. Command line: <!-- [e.g. `nextflow run ...`] -->
2. See error: <!-- [Please provide your error message] -->
## Expected behaviour
<!-- A clear and concise description of what you expected to happen. -->
## Log files
Have you provided the following extra information/files:
- [ ] The command used to run the module
- [ ] The `.nextflow.log` file <!-- this is a hidden file in the directory where you launched the module -->
## System
- Hardware: <!-- [e.g. HPC, Desktop, Cloud...] -->
- Executor: <!-- [e.g. slurm, local, awsbatch...] -->
- OS: <!-- [e.g. CentOS Linux, macOS, Linux Mint...] -->
- Version <!-- [e.g. 7, 10.13.6, 18.3...] -->
## Nextflow Installation
- Version: <!-- [e.g. 19.10.0] -->
## Container engine
- Engine: <!-- [e.g. Conda, Docker, Singularity or Podman] -->
- version: <!-- [e.g. 1.0.0] -->
- Image tag: <!-- [e.g. nfcore/module:2.6] -->
## Additional context
<!-- Add any other context about the problem here. -->

.github/ISSUE_TEMPLATE/bug_report.yml (new file)

@@ -0,0 +1,52 @@
name: Bug report
description: Report something that is broken or incorrect
labels: bug
body:
- type: checkboxes
attributes:
label: Have you checked the docs?
description: I have checked the following places for my error
options:
- label: "[nf-core website: troubleshooting](https://nf-co.re/usage/troubleshooting)"
required: true
- label: "[nf-core modules documentation](https://nf-co.re/docs/contributing/modules)"
required: true
- type: textarea
id: description
attributes:
label: Description of the bug
description: A clear and concise description of what the bug is.
validations:
required: true
- type: textarea
id: command_used
attributes:
label: Command used and terminal output
description: Steps to reproduce the behaviour. Please paste the command you used to launch the pipeline and the output from your terminal.
render: console
placeholder: |
$ nextflow run ...
Some output where something broke
- type: textarea
id: files
attributes:
label: Relevant files
description: |
Please drag and drop the relevant files here. Create a `.zip` archive if the extension is not allowed.
Your verbose log file `.nextflow.log` is often useful _(this is a hidden file in the directory where you launched the pipeline)_ as well as custom Nextflow configuration files.
- type: textarea
id: system
attributes:
label: System information
description: |
* Nextflow version _(eg. 21.10.3)_
* Hardware _(eg. HPC, Desktop, Cloud)_
* Executor _(eg. slurm, local, awsbatch)_
* Container engine and version: _(e.g. Docker 1.0.0, Singularity, Conda, Podman, Shifter or Charliecloud)_
* OS and version: _(eg. CentOS Linux, macOS, Ubuntu 22.04)_
* Image tag _(eg. nfcore/cellranger:2.6)_


@@ -1,32 +0,0 @@
---
name: Feature request
about: Suggest an idea for nf-core/modules
title: "[FEATURE]"
---
<!--
# nf-core/modules feature request
Hi there!
Thanks for suggesting a new feature for the modules!
Please delete this text and anything that's not relevant from the template below:
-->
## Is your feature request related to a problem? Please describe
<!-- A clear and concise description of what the problem is. -->
<!-- e.g. [I'm always frustrated when ...] -->
## Describe the solution you'd like
<!-- A clear and concise description of what you want to happen. -->
## Describe alternatives you've considered
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
## Additional context
<!-- Add any other context about the feature request here. -->


@@ -0,0 +1,32 @@
name: Feature request
description: Suggest an idea for nf-core/modules
labels: feature
title: "[FEATURE]"
body:
- type: textarea
id: description
attributes:
label: Is your feature request related to a problem? Please describe
description: A clear and concise description of what the problem is.
placeholder: |
<!-- e.g. [I'm always frustrated when ...] -->
validations:
required: true
- type: textarea
id: solution
attributes:
label: Describe the solution you'd like
description: A clear and concise description of what you want to happen.
- type: textarea
id: alternatives
attributes:
label: Describe alternatives you've considered
description: A clear and concise description of any alternative solutions or features you've considered.
- type: textarea
id: additional_context
attributes:
label: Additional context
description: Add any other context about the feature request here.


@@ -1,26 +0,0 @@
---
name: New module
about: Suggest a new module for nf-core/modules
title: "new module: TOOL/SUBTOOL"
label: new module
---
<!--
# nf-core/modules new module suggestion
Hi there!
Thanks for suggesting a new module for the modules!
Please delete this text and anything that's not relevant from the template below:
Replace TOOL with the bioconda name for the tool in the following text, so that the link is functional.
Replace TOOL/SUBTOOL in the issue title so that it's understandable.
-->
I think it would be good to have a module for [TOOL](https://bioconda.github.io/recipes/TOOL/README.html)
- [ ] This module does not exist yet with the [`nf-core modules list`](https://github.com/nf-core/tools#list-modules) command
- [ ] There is no [open pull request](https://github.com/nf-core/modules/pulls) for this module
- [ ] There is no [open issue](https://github.com/nf-core/modules/issues) for this module
- [ ] If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module

.github/ISSUE_TEMPLATE/new_module.yml (new file)

@@ -0,0 +1,36 @@
name: New module
description: Suggest a new module for nf-core/modules
title: "new module: TOOL/SUBTOOL"
labels: new module
body:
- type: checkboxes
attributes:
label: Is there an existing module for this?
description: This module does not exist yet with the [`nf-core modules list`](https://github.com/nf-core/tools#list-modules) command
options:
- label: I have searched for the existing module
required: true
- type: checkboxes
attributes:
label: Is there an open PR for this?
description: There is no [open pull request](https://github.com/nf-core/modules/pulls) for this module
options:
- label: I have searched for existing PRs
required: true
- type: checkboxes
attributes:
label: Is there an open issue for this?
description: There is no [open issue](https://github.com/nf-core/modules/issues) for this module
options:
- label: I have searched for existing issues
required: true
- type: checkboxes
attributes:
label: Are you going to work on this?
description: If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
options:
- label: If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
required: false


@@ -2,10 +2,10 @@ process BAMTOOLS_SPLIT {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::bamtools=2.5.1" : null)
conda (params.enable_conda ? "bioconda::bamtools=2.5.2" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/bamtools:2.5.1--h9a82719_9' :
'quay.io/biocontainers/bamtools:2.5.1--h9a82719_9' }"
'https://depot.galaxyproject.org/singularity/bamtools:2.5.2--hd03093a_0' :
'quay.io/biocontainers/bamtools:2.5.2--hd03093a_0' }"
input:
tuple val(meta), path(bam)
@@ -20,10 +20,14 @@ process BAMTOOLS_SPLIT {
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
def input_list = bam.collect{"-in $it"}.join(' ')
"""
bamtools \\
merge \\
$input_list \\
| bamtools \\
split \\
-in $bam \\
-stub $prefix \\
$args
cat <<-END_VERSIONS > versions.yml
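
With this change `BAMTOOLS_SPLIT` accepts one or more BAM files per sample, piping `bamtools merge` into `bamtools split` so multiple inputs are merged before splitting. A minimal caller sketch (the include path and file names are illustrative, not from this commit):

#!/usr/bin/env nextflow
nextflow.enable.dsl = 2

include { BAMTOOLS_SPLIT } from './modules/bamtools/split/main'

workflow {
    // meta map plus a list of BAMs; the module merges the list with
    // `bamtools merge` before splitting by reference
    input = [ [ id:'sample1' ], [ file('a.bam'), file('b.bam') ] ]
    BAMTOOLS_SPLIT ( input )
}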


@@ -23,7 +23,7 @@ input:
e.g. [ id:'test', single_end:false ]
- bam:
type: file
description: A BAM file to split
description: A list of one or more BAM files to merge and then split
pattern: "*.bam"
output:
@@ -43,3 +43,4 @@ output:
authors:
- "@sguizard"
- "@matthdsm"


@@ -2,19 +2,25 @@ process DIAMOND_BLASTP {
tag "$meta.id"
label 'process_medium'
// Dimaond is limited to v2.0.9 because there is not a
// singularity version higher than this at the current time.
conda (params.enable_conda ? "bioconda::diamond=2.0.9" : null)
conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"
input:
tuple val(meta), path(fasta)
path db
val out_ext
val blast_columns
output:
tuple val(meta), path('*.txt'), emit: txt
tuple val(meta), path('*.blast'), optional: true, emit: blast
tuple val(meta), path('*.xml') , optional: true, emit: xml
tuple val(meta), path('*.txt') , optional: true, emit: txt
tuple val(meta), path('*.daa') , optional: true, emit: daa
tuple val(meta), path('*.sam') , optional: true, emit: sam
tuple val(meta), path('*.tsv') , optional: true, emit: tsv
tuple val(meta), path('*.paf') , optional: true, emit: paf
path "versions.yml" , emit: versions
when:
@@ -23,6 +29,21 @@ process DIAMOND_BLASTP {
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
def columns = blast_columns ? "${blast_columns}" : ''
switch ( out_ext ) {
case "blast": outfmt = 0; break
case "xml": outfmt = 5; break
case "txt": outfmt = 6; break
case "daa": outfmt = 100; break
case "sam": outfmt = 101; break
case "tsv": outfmt = 102; break
case "paf": outfmt = 103; break
default:
outfmt = '6';
out_ext = 'txt';
log.warn("Unknown output file format provided (${out_ext}): selecting DIAMOND default of tabular BLAST output (txt)");
break
}
"""
DB=`find -L ./ -name "*.dmnd" | sed 's/.dmnd//'`
@@ -31,8 +52,9 @@ process DIAMOND_BLASTP {
--threads $task.cpus \\
--db \$DB \\
--query $fasta \\
--outfmt ${outfmt} ${columns} \\
$args \\
--out ${prefix}.txt
--out ${prefix}.${out_ext}
cat <<-END_VERSIONS > versions.yml
"${task.process}":

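Both DIAMOND modules now take `out_ext` and `blast_columns` values, mapping the requested extension to DIAMOND's numeric `--outfmt` code (0, 5, 6, 100, 101, 102 or 103) and falling back to tabular output for unknown extensions. A caller sketch mirroring the updated tests (file paths are illustrative):

#!/usr/bin/env nextflow
nextflow.enable.dsl = 2

include { DIAMOND_MAKEDB } from './modules/diamond/makedb/main'
include { DIAMOND_BLASTP } from './modules/diamond/blastp/main'

workflow {
    db    = [ file('proteome.fasta') ]
    query = [ file('proteins.fasta') ]
    DIAMOND_MAKEDB ( db )
    // 'txt' selects --outfmt 6; blast_columns customises the tabular columns
    DIAMOND_BLASTP ( [ [ id:'sample1' ], query ], DIAMOND_MAKEDB.out.db, 'txt', 'qseqid qlen' )
}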

@@ -28,12 +28,50 @@ input:
type: directory
description: Directory containing the protein blast database
pattern: "*"
- out_ext:
type: string
description: |
Specify the type of output file to be generated. `blast` corresponds to
BLAST pairwise format. `xml` corresponds to BLAST xml format.
`txt` corresponds to BLAST tabular format. `tsv` corresponds to
taxonomic classification format.
pattern: "blast|xml|txt|daa|sam|tsv|paf"
- blast_columns:
type: string
description: |
Optional space separated list of DIAMOND tabular BLAST output keywords
used in conjunction with the 'txt' out_ext option (--outfmt 6). See
DIAMOND documentation for more information.
output:
- txt:
- blast:
type: file
description: File containing blastp hits
pattern: "*.{blastp.txt}"
pattern: "*.{blast}"
- xml:
type: file
description: File containing blastp hits
pattern: "*.{xml}"
- txt:
type: file
description: File containing hits in tabular BLAST format.
pattern: "*.{txt}"
- daa:
type: file
description: File containing hits in DAA format
pattern: "*.{daa}"
- sam:
type: file
description: File containing aligned reads in SAM format
pattern: "*.{sam}"
- tsv:
type: file
description: Tab separated file containing taxonomic classification of hits
pattern: "*.{tsv}"
- paf:
type: file
description: File containing aligned reads in pairwise mapping format (PAF)
pattern: "*.{paf}"
- versions:
type: file
description: File containing software versions
@@ -41,3 +79,4 @@ output:
authors:
- "@spficklin"
- "@jfy133"


@@ -2,19 +2,25 @@ process DIAMOND_BLASTX {
tag "$meta.id"
label 'process_medium'
// Dimaond is limited to v2.0.9 because there is not a
// singularity version higher than this at the current time.
conda (params.enable_conda ? "bioconda::diamond=2.0.9" : null)
conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"
input:
tuple val(meta), path(fasta)
path db
val out_ext
val blast_columns
output:
tuple val(meta), path('*.txt'), emit: txt
tuple val(meta), path('*.blast'), optional: true, emit: blast
tuple val(meta), path('*.xml') , optional: true, emit: xml
tuple val(meta), path('*.txt') , optional: true, emit: txt
tuple val(meta), path('*.daa') , optional: true, emit: daa
tuple val(meta), path('*.sam') , optional: true, emit: sam
tuple val(meta), path('*.tsv') , optional: true, emit: tsv
tuple val(meta), path('*.paf') , optional: true, emit: paf
path "versions.yml" , emit: versions
when:
@@ -23,6 +29,21 @@ process DIAMOND_BLASTX {
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
def columns = blast_columns ? "${blast_columns}" : ''
switch ( out_ext ) {
case "blast": outfmt = 0; break
case "xml": outfmt = 5; break
case "txt": outfmt = 6; break
case "daa": outfmt = 100; break
case "sam": outfmt = 101; break
case "tsv": outfmt = 102; break
case "paf": outfmt = 103; break
default:
outfmt = '6';
out_ext = 'txt';
log.warn("Unknown output file format provided (${out_ext}): selecting DIAMOND default of tabular BLAST output (txt)");
break
}
"""
DB=`find -L ./ -name "*.dmnd" | sed 's/.dmnd//'`
@@ -31,8 +52,9 @@ process DIAMOND_BLASTX {
--threads $task.cpus \\
--db \$DB \\
--query $fasta \\
--outfmt ${outfmt} ${columns} \\
$args \\
--out ${prefix}.txt
--out ${prefix}.${out_ext}
cat <<-END_VERSIONS > versions.yml
"${task.process}":


@@ -28,12 +28,44 @@ input:
type: directory
description: Directory containing the nucleotide blast database
pattern: "*"
- out_ext:
type: string
description: |
Specify the type of output file to be generated. `blast` corresponds to
BLAST pairwise format. `xml` corresponds to BLAST xml format.
`txt` corresponds to BLAST tabular format. `tsv` corresponds to
taxonomic classification format.
pattern: "blast|xml|txt|daa|sam|tsv|paf"
output:
- blast:
type: file
description: File containing blastp hits
pattern: "*.{blast}"
- xml:
type: file
description: File containing blastp hits
pattern: "*.{xml}"
- txt:
type: file
description: File containing blastx hits
pattern: "*.{blastx.txt}"
description: File containing hits in tabular BLAST format.
pattern: "*.{txt}"
- daa:
type: file
description: File containing hits in DAA format
pattern: "*.{daa}"
- sam:
type: file
description: File containing aligned reads in SAM format
pattern: "*.{sam}"
- tsv:
type: file
description: Tab separated file containing taxonomic classification of hits
pattern: "*.{tsv}"
- paf:
type: file
description: File containing aligned reads in pairwise mapping format (PAF)
pattern: "*.{paf}"
- versions:
type: file
description: File containing software versions
@@ -41,3 +73,4 @@ output:
authors:
- "@spficklin"
- "@jfy133"


@@ -2,12 +2,10 @@ process DIAMOND_MAKEDB {
tag "$fasta"
label 'process_medium'
// Dimaond is limited to v2.0.9 because there is not a
// singularity version higher than this at the current time.
conda (params.enable_conda ? 'bioconda::diamond=2.0.9' : null)
conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"
input:
path fasta


@@ -0,0 +1,43 @@
process ELPREP_MERGE {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::elprep=5.1.2" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/elprep:5.1.2--he881be0_0':
'quay.io/biocontainers/elprep:5.1.2--he881be0_0' }"
input:
tuple val(meta), path(bam)
output:
tuple val(meta), path("output/**.{bam,sam}") , emit: bam
path "versions.yml" , emit: versions
when:
task.ext.when == null || task.ext.when
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
def suffix = args.contains("--output-type sam") ? "sam" : "bam"
def single_end = meta.single_end ? " --single-end" : ""
"""
# create directory and move all input so elprep can find and merge them before splitting
mkdir input
mv ${bam} input/
elprep merge \\
input/ \\
output/${prefix}.${suffix} \\
$args \\
${single_end} \\
--nr-of-threads $task.cpus
cat <<-END_VERSIONS > versions.yml
"${task.process}":
elprep: \$(elprep 2>&1 | head -n2 | tail -n1 |sed 's/^.*version //;s/ compiled.*\$//')
END_VERSIONS
"""
}
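
The new module is intended to sit downstream of `ELPREP_SPLIT`, re-merging the per-chunk BAMs it emits; a sketch along the lines of the test added below (the input path is illustrative):

#!/usr/bin/env nextflow
nextflow.enable.dsl = 2

include { ELPREP_SPLIT } from './modules/elprep/split/main'
include { ELPREP_MERGE } from './modules/elprep/merge/main'

workflow {
    input = [ [ id:'sample1', single_end:false ], file('sample1.sorted.bam') ]
    ELPREP_SPLIT ( input )                // split the BAM into chunks
    ELPREP_MERGE ( ELPREP_SPLIT.out.bam ) // merge the chunks back into one file
}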


@@ -0,0 +1,44 @@
name: "elprep_merge"
description: Merge split bam/sam chunks into one file
keywords:
- bam
- sam
- merge
tools:
- "elprep":
description: "elPrep is a high-performance tool for preparing .sam/.bam files for variant calling in sequencing pipelines. It can be used as a drop-in replacement for SAMtools/Picard/GATK4."
homepage: "https://github.com/ExaScience/elprep"
documentation: "https://github.com/ExaScience/elprep"
tool_dev_url: "https://github.com/ExaScience/elprep"
doi: "10.1371/journal.pone.0244471"
licence: "['AGPL v3']"
input:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- bam:
type: file
description: List of BAM/SAM chunks to merge
pattern: "*.{bam,sam}"
output:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
- bam:
type: file
description: Merged BAM/SAM file
pattern: "*.{bam,sam}"
authors:
- "@matthdsm"


@@ -12,7 +12,7 @@ process GATK4_MARKDUPLICATES {
output:
tuple val(meta), path("*.bam") , emit: bam
tuple val(meta), path("*.bai") , emit: bai
tuple val(meta), path("*.bai") , optional:true, emit: bai
tuple val(meta), path("*.metrics"), emit: metrics
path "versions.yml" , emit: versions


@@ -27,8 +27,8 @@ process MINIMAP2_ALIGN {
def prefix = task.ext.prefix ?: "${meta.id}"
def input_reads = meta.single_end ? "$reads" : "${reads[0]} ${reads[1]}"
def bam_output = bam_format ? "-a | samtools sort | samtools view -@ ${task.cpus} -b -h -o ${prefix}.bam" : "-o ${prefix}.paf"
def cigar_paf = cigar_paf_format && !sam_format ? "-c" : ''
def set_cigar_bam = cigar_bam && sam_format ? "-L" : ''
def cigar_paf = cigar_paf_format && !bam_format ? "-c" : ''
def set_cigar_bam = cigar_bam && bam_format ? "-L" : ''
"""
minimap2 \\
$args \\


@@ -2,10 +2,10 @@ process PICARD_ADDORREPLACEREADGROUPS {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam)
@@ -38,12 +38,12 @@ process PICARD_ADDORREPLACEREADGROUPS {
-Xmx${avail_mem}g \\
--INPUT ${bam} \\
--OUTPUT ${prefix}.bam \\
-ID ${ID} \\
-LB ${LIBRARY} \\
-PL ${PLATFORM} \\
-PU ${BARCODE} \\
-SM ${SAMPLE} \\
-CREATE_INDEX true
--RGID ${ID} \\
--RGLB ${LIBRARY} \\
--RGPL ${PLATFORM} \\
--RGPU ${BARCODE} \\
--RGSM ${SAMPLE} \\
--CREATE_INDEX true
cat <<-END_VERSIONS > versions.yml
"${task.process}":

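The Picard modules in this commit all move to 2.27.1 and switch their commands to the new-style `--OPTION value` arguments, which the newer argument parser prefers over the legacy `OPTION=value` and single-dash forms. Any downstream `ext.args` overrides need the same treatment, as the test configs further below show; a config sketch:

process {
    withName: PICARD_MARKDUPLICATES_UNSORTED {
        // before this commit: ext.args = 'ASSUME_SORT_ORDER=queryname'
        ext.args = '--ASSUME_SORT_ORDER queryname'
    }
}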

@@ -2,10 +2,10 @@ process PICARD_CLEANSAM {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam)
@@ -31,8 +31,8 @@ process PICARD_CLEANSAM {
-Xmx${avail_mem}g \\
CleanSam \\
${args} \\
-I ${bam} \\
-O ${prefix}.bam
--INPUT ${bam} \\
--OUTPUT ${prefix}.bam
cat <<-END_VERSIONS > versions.yml
"${task.process}":


@@ -2,10 +2,10 @@ process PICARD_COLLECTHSMETRICS {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam)
@@ -38,10 +38,10 @@ process PICARD_COLLECTHSMETRICS {
CollectHsMetrics \\
$args \\
$reference \\
-BAIT_INTERVALS $bait_intervals \\
-TARGET_INTERVALS $target_intervals \\
-INPUT $bam \\
-OUTPUT ${prefix}.CollectHsMetrics.coverage_metrics
--BAIT_INTERVALS $bait_intervals \\
--TARGET_INTERVALS $target_intervals \\
--INPUT $bam \\
--OUTPUT ${prefix}.CollectHsMetrics.coverage_metrics
cat <<-END_VERSIONS > versions.yml


@@ -2,10 +2,10 @@ process PICARD_COLLECTMULTIPLEMETRICS {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam)
@@ -33,9 +33,9 @@ process PICARD_COLLECTMULTIPLEMETRICS {
-Xmx${avail_mem}g \\
CollectMultipleMetrics \\
$args \\
INPUT=$bam \\
OUTPUT=${prefix}.CollectMultipleMetrics \\
REFERENCE_SEQUENCE=$fasta
--INPUT $bam \\
--OUTPUT ${prefix}.CollectMultipleMetrics \\
--REFERENCE_SEQUENCE $fasta
cat <<-END_VERSIONS > versions.yml
"${task.process}":


@@ -2,13 +2,13 @@ process PICARD_COLLECTWGSMETRICS {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam), path(bai)
tuple val(meta), path(bam)
path fasta
output:
@@ -32,9 +32,10 @@ process PICARD_COLLECTWGSMETRICS {
-Xmx${avail_mem}g \\
CollectWgsMetrics \\
$args \\
INPUT=$bam \\
OUTPUT=${prefix}.CollectWgsMetrics.coverage_metrics \\
REFERENCE_SEQUENCE=$fasta
--INPUT $bam \\
--OUTPUT ${prefix}.CollectWgsMetrics.coverage_metrics \\
--REFERENCE_SEQUENCE $fasta
cat <<-END_VERSIONS > versions.yml
"${task.process}":


@@ -2,10 +2,10 @@ process PICARD_CREATESEQUENCEDICTIONARY {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(fasta)
@@ -31,8 +31,8 @@ process PICARD_CREATESEQUENCEDICTIONARY {
-Xmx${avail_mem}g \\
CreateSequenceDictionary \\
$args \\
R=$fasta \\
O=${prefix}.dict
--REFERENCE $fasta \\
--OUTPUT ${prefix}.dict
cat <<-END_VERSIONS > versions.yml
"${task.process}":


@@ -2,10 +2,10 @@ process PICARD_CROSSCHECKFINGERPRINTS {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(input1)


@@ -2,10 +2,10 @@ process PICARD_FILTERSAMREADS {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam), path(readlist)


@@ -2,10 +2,10 @@ process PICARD_FIXMATEINFORMATION {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam)
@@ -31,8 +31,8 @@ process PICARD_FIXMATEINFORMATION {
picard \\
FixMateInformation \\
-Xmx${avail_mem}g \\
-I ${bam} \\
-O ${prefix}.bam \\
--INPUT ${bam} \\
--OUTPUT ${prefix}.bam \\
--VALIDATION_STRINGENCY ${STRINGENCY}
cat <<-END_VERSIONS > versions.yml


@@ -2,10 +2,10 @@ process PICARD_LIFTOVERVCF {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(input_vcf)
@@ -35,11 +35,11 @@ process PICARD_LIFTOVERVCF {
-Xmx${avail_mem}g \\
LiftoverVcf \\
$args \\
I=$input_vcf \\
O=${prefix}.lifted.vcf.gz \\
CHAIN=$chain \\
REJECT=${prefix}.unlifted.vcf.gz \\
R=$fasta
--INPUT $input_vcf \\
--OUTPUT ${prefix}.lifted.vcf.gz \\
--CHAIN $chain \\
--REJECT ${prefix}.unlifted.vcf.gz \\
--REFERENCE_SEQUENCE $fasta
cat <<-END_VERSIONS > versions.yml
"${task.process}":


@@ -2,10 +2,10 @@ process PICARD_MARKDUPLICATES {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam)
@@ -33,9 +33,9 @@ process PICARD_MARKDUPLICATES {
-Xmx${avail_mem}g \\
MarkDuplicates \\
$args \\
I=$bam \\
O=${prefix}.bam \\
M=${prefix}.MarkDuplicates.metrics.txt
--INPUT $bam \\
--OUTPUT ${prefix}.bam \\
--METRICS_FILE ${prefix}.MarkDuplicates.metrics.txt
cat <<-END_VERSIONS > versions.yml
"${task.process}":


@@ -2,10 +2,10 @@ process PICARD_MERGESAMFILES {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bams)
@@ -33,8 +33,8 @@ process PICARD_MERGESAMFILES {
-Xmx${avail_mem}g \\
MergeSamFiles \\
$args \\
${'INPUT='+bam_files.join(' INPUT=')} \\
OUTPUT=${prefix}.bam
${'--INPUT '+bam_files.join(' --INPUT ')} \\
--OUTPUT ${prefix}.bam
cat <<-END_VERSIONS > versions.yml
"${task.process}":
picard: \$( echo \$(picard MergeSamFiles --version 2>&1) | grep -o 'Version:.*' | cut -f2- -d:)


@@ -2,10 +2,10 @@ process PICARD_SORTSAM {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam)


@@ -2,10 +2,10 @@ process PICARD_SORTVCF {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(vcf)
@@ -22,8 +22,8 @@ process PICARD_SORTVCF {
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
def seq_dict = sequence_dict ? "-SEQUENCE_DICTIONARY $sequence_dict" : ""
def reference = reference ? "-REFERENCE_SEQUENCE $reference" : ""
def seq_dict = sequence_dict ? "--SEQUENCE_DICTIONARY $sequence_dict" : ""
def reference = reference ? "--REFERENCE_SEQUENCE $reference" : ""
def avail_mem = 3
if (!task.memory) {
log.info '[Picard SortVcf] Available memory not known - defaulting to 3GB. Specify process memory requirements to change this.'


@@ -0,0 +1,41 @@
//
// Run QC steps on BAM/CRAM files using Picard
//
include { PICARD_COLLECTMULTIPLEMETRICS } from '../../../modules/picard/collectmultiplemetrics/main'
include { PICARD_COLLECTWGSMETRICS } from '../../../modules/picard/collectwgsmetrics/main'
include { PICARD_COLLECTHSMETRICS } from '../../../modules/picard/collecthsmetrics/main'
workflow BAM_QC_PICARD {
take:
ch_bam // channel: [ val(meta), [ bam ]]
ch_fasta // channel: [ fasta ]
ch_fasta_fai // channel: [ fasta_fai ]
ch_bait_interval // channel: [ bait_interval ]
ch_target_interval // channel: [ target_interval ]
main:
ch_versions = Channel.empty()
ch_coverage_metrics = Channel.empty()
PICARD_COLLECTMULTIPLEMETRICS( ch_bam, ch_fasta )
ch_versions = ch_versions.mix(PICARD_COLLECTMULTIPLEMETRICS.out.versions.first())
if (ch_bait_interval || ch_target_interval) {
if (!ch_bait_interval) log.error("Bait interval channel is empty")
if (!ch_target_interval) log.error("Target interval channel is empty")
PICARD_COLLECTHSMETRICS( ch_bam, ch_fasta, ch_fasta_fai, ch_bait_interval, ch_target_interval )
ch_coverage_metrics = ch_coverage_metrics.mix(PICARD_COLLECTHSMETRICS.out.metrics)
ch_versions = ch_versions.mix(PICARD_COLLECTHSMETRICS.out.versions.first())
} else {
PICARD_COLLECTWGSMETRICS( ch_bam, ch_fasta )
ch_versions = ch_versions.mix(PICARD_COLLECTWGSMETRICS.out.versions.first())
ch_coverage_metrics = ch_coverage_metrics.mix(PICARD_COLLECTWGSMETRICS.out.metrics)
}
emit:
coverage_metrics = ch_coverage_metrics // channel: [ val(meta), [ coverage_metrics ] ]
multiple_metrics = PICARD_COLLECTMULTIPLEMETRICS.out.metrics // channel: [ val(meta), [ multiple_metrics ] ]
versions = ch_versions // channel: [ versions.yml ]
}
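
The subworkflow always runs `CollectMultipleMetrics`, and chooses `CollectHsMetrics` when bait and target interval channels are supplied, falling back to `CollectWgsMetrics` otherwise. A usage sketch matching the WGS test added below (file paths are illustrative):

#!/usr/bin/env nextflow
nextflow.enable.dsl = 2

include { BAM_QC_PICARD } from './subworkflows/nf-core/bam_qc_picard/main'

workflow {
    bam   = [ [ id:'sample1', single_end:false ], file('sample1.bam') ]
    fasta = file('genome.fasta')
    fai   = file('genome.fasta.fai')
    // empty bait/target interval channels select the WGS metrics branch
    BAM_QC_PICARD ( bam, fasta, fai, [], [] )
}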


@@ -0,0 +1,60 @@
name: bam_qc
description: Produces comprehensive statistics from a BAM file
keywords:
- statistics
- counts
- hs_metrics
- wgs_metrics
- bam
- sam
- cram
modules:
- picard/collectmultiplemetrics
- picard/collectwgsmetrics
- picard/collecthsmetrics
input:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- bam:
type: file
description: BAM/CRAM/SAM file
pattern: "*.{bam,cram,sam}"
- fasta:
type: optional file
description: Reference fasta file
pattern: "*.{fasta,fa}"
- fasta_fai:
type: optional file
description: Reference fasta file index
pattern: "*.{fasta,fa}.fai"
- bait_intervals:
type: optional file
description: An interval list file that contains the locations of the baits used.
pattern: "baits.interval_list"
- target_intervals:
type: optional file
description: An interval list file that contains the locations of the targets.
pattern: "targets.interval_list"
output:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- coverage_metrics:
type: file
description: Alignment metrics files generated by picard CollectHsMetrics or CollectWgsMetrics
pattern: "*_metrics.txt"
- multiple_metrics:
type: file
description: Alignment metrics files generated by picard CollectMultipleMetrics
pattern: "*_{metrics}"
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
authors:
- "@matthdsm"


@@ -603,6 +603,10 @@ elprep/filter:
- modules/elprep/filter/**
- tests/modules/elprep/filter/**
elprep/merge:
- modules/elprep/merge/**
- tests/modules/elprep/merge/**
elprep/split:
- modules/elprep/split/**
- tests/modules/elprep/split/**


@@ -14,6 +14,7 @@ params {
genome_paf = "${test_data_dir}/genomics/sarscov2/genome/genome.paf"
genome_sizes = "${test_data_dir}/genomics/sarscov2/genome/genome.sizes"
transcriptome_fasta = "${test_data_dir}/genomics/sarscov2/genome/transcriptome.fasta"
proteome_fasta = "${test_data_dir}/genomics/sarscov2/genome/proteome.fasta"
transcriptome_paf = "${test_data_dir}/genomics/sarscov2/genome/transcriptome.paf"
test_bed = "${test_data_dir}/genomics/sarscov2/genome/bed/test.bed"


@@ -2,13 +2,29 @@
nextflow.enable.dsl = 2
include { BAMTOOLS_SPLIT } from '../../../../modules/bamtools/split/main.nf'
include { BAMTOOLS_SPLIT as BAMTOOLS_SPLIT_SINGLE } from '../../../../modules/bamtools/split/main.nf'
include { BAMTOOLS_SPLIT as BAMTOOLS_SPLIT_MULTIPLE } from '../../../../modules/bamtools/split/main.nf'
workflow test_bamtools_split {
workflow test_bamtools_split_single_input {
input = [
[ id:'test', single_end:false ], // meta map
file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true) ]
file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
]
BAMTOOLS_SPLIT ( input )
BAMTOOLS_SPLIT_SINGLE ( input )
}
workflow test_bamtools_split_multiple {
input = [
[ id:'test', single_end:false ], // meta map
[
file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true),
file(params.test_data['homo_sapiens']['illumina']['test2_paired_end_sorted_bam'], checkIfExists: true)
]
]
BAMTOOLS_SPLIT_MULTIPLE ( input )
}


@@ -1,10 +1,23 @@
- name: bamtools split test_bamtools_split
command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
- name: bamtools split test_bamtools_split_single_input
command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split_single_input -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
tags:
- bamtools/split
- bamtools
- bamtools/split
files:
- path: output/bamtools/test.paired_end.sorted.REF_chr22.bam
- path: output/bamtools/test.REF_chr22.bam
md5sum: b7dc50e0edf9c6bfc2e3b0e6d074dc07
- path: output/bamtools/test.paired_end.sorted.REF_unmapped.bam
- path: output/bamtools/test.REF_unmapped.bam
md5sum: e0754bf72c51543b2d745d96537035fb
- path: output/bamtools/versions.yml
- name: bamtools split test_bamtools_split_multiple
command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split_multiple -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
tags:
- bamtools
- bamtools/split
files:
- path: output/bamtools/test.REF_chr22.bam
md5sum: 585675bea34c48ebe9db06a561d4b4fa
- path: output/bamtools/test.REF_unmapped.bam
md5sum: 16ad644c87b9471f3026bc87c98b4963
- path: output/bamtools/versions.yml


@@ -7,9 +7,22 @@ include { DIAMOND_BLASTP } from '../../../../modules/diamond/blastp/main.nf'
workflow test_diamond_blastp {
db = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
db = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
fasta = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
out_ext = 'txt'
blast_columns = 'qseqid qlen'
DIAMOND_MAKEDB ( db )
DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db )
DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
}
workflow test_diamond_blastp_daa {
db = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
fasta = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
out_ext = 'daa'
blast_columns = []
DIAMOND_MAKEDB ( db )
DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
}


@@ -1,8 +1,17 @@
- name: diamond blastp
command: nextflow run ./tests/modules/diamond/blastp -entry test_diamond_blastp -c ./tests/config/nextflow.config -c ./tests/modules/diamond/blastp/nextflow.config
- name: diamond blastp test_diamond_blastp
command: nextflow run tests/modules/diamond/blastp -entry test_diamond_blastp -c tests/config/nextflow.config
tags:
- diamond
- diamond/blastp
- diamond
files:
- path: ./output/diamond/test.diamond_blastp.txt
md5sum: 3ca7f6290c1d8741c573370e6f8b4db0
- path: output/diamond/test.diamond_blastp.txt
- path: output/diamond/versions.yml
- name: diamond blastp test_diamond_blastp_daa
command: nextflow run tests/modules/diamond/blastp -entry test_diamond_blastp_daa -c tests/config/nextflow.config
tags:
- diamond/blastp
- diamond
files:
- path: output/diamond/test.diamond_blastp.daa
- path: output/diamond/versions.yml


@@ -7,9 +7,22 @@ include { DIAMOND_BLASTX } from '../../../../modules/diamond/blastx/main.nf'
workflow test_diamond_blastx {
db = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
db = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
out_ext = 'tfdfdt' // Nonsense file extension to check default case.
blast_columns = 'qseqid qlen'
DIAMOND_MAKEDB ( db )
DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db )
DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
}
workflow test_diamond_blastx_daa {
db = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
out_ext = 'daa'
blast_columns = []
DIAMOND_MAKEDB ( db )
DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
}


@@ -1,8 +1,18 @@
- name: diamond blastx
command: nextflow run ./tests/modules/diamond/blastx -entry test_diamond_blastx -c ./tests/config/nextflow.config -c ./tests/modules/diamond/blastx/nextflow.config
- name: diamond blastx test_diamond_blastx
command: nextflow run tests/modules/diamond/blastx -entry test_diamond_blastx -c tests/config/nextflow.config
tags:
- diamond
- diamond/blastx
files:
- path: ./output/diamond/test.diamond_blastx.txt
md5sum: d41d8cd98f00b204e9800998ecf8427e
- path: output/diamond/test.diamond_blastx.txt
- path: output/diamond/versions.yml
- name: diamond blastx test_diamond_blastx_daa
command: nextflow run tests/modules/diamond/blastx -entry test_diamond_blastx_daa -c tests/config/nextflow.config
tags:
- diamond
- diamond/blastx
files:
- path: output/diamond/test.diamond_blastx.daa
md5sum: 0df4a833408416f32981415873facc11
- path: output/diamond/versions.yml


@@ -6,7 +6,7 @@ include { DIAMOND_MAKEDB } from '../../../../modules/diamond/makedb/main.nf'
workflow test_diamond_makedb {
input = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
input = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
DIAMOND_MAKEDB ( input )
}


@@ -1,8 +1,9 @@
- name: diamond makedb test_diamond_makedb
command: nextflow run ./tests/modules/diamond/makedb -entry test_diamond_makedb -c ./tests/config/nextflow.config -c ./tests/modules/diamond/makedb/nextflow.config
command: nextflow run tests/modules/diamond/makedb -entry test_diamond_makedb -c tests/config/nextflow.config
tags:
- diamond
- diamond/makedb
- diamond
files:
- path: output/diamond/genome.fasta.dmnd
md5sum: 2447fb376394c20d43ea3aad2aa5d15d
- path: output/diamond/proteome.fasta.dmnd
md5sum: fc28c50b202dd7a7c5451cddff2ba1f4
- path: output/diamond/versions.yml


@@ -0,0 +1,17 @@
#!/usr/bin/env nextflow
nextflow.enable.dsl = 2
include { ELPREP_SPLIT } from '../../../../modules/elprep/split/main.nf'
include { ELPREP_MERGE } from '../../../../modules/elprep/merge/main.nf'
workflow test_elprep_merge {
input = [
[ id:'test', single_end:false ], // meta map
file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
]
ELPREP_SPLIT ( input )
ELPREP_MERGE ( ELPREP_SPLIT.out.bam )
}


@@ -0,0 +1,5 @@
process {
withName : ELPREP_MERGE {
publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }
}
}


@@ -0,0 +1,8 @@
- name: elprep merge test_elprep_merge
command: nextflow run tests/modules/elprep/merge -entry test_elprep_merge -c tests/config/nextflow.config
tags:
- elprep
- elprep/merge
files:
- path: output/elprep/output/test.bam
- path: output/elprep/versions.yml


@@ -7,4 +7,3 @@
- path: output/picard/test.bam
md5sum: 7b82f3461c2d80fc6a10385e78c9427f
- path: output/picard/versions.yml
md5sum: 8a2d176295e1343146ea433c79bb517f


@@ -7,4 +7,3 @@
- path: output/picard/test.bam
md5sum: a48f8e77a1480445efc57570c3a38a68
- path: output/picard/versions.yml
md5sum: e6457d7c6de51bf6f4b577eda65e57ac


@@ -7,7 +7,6 @@ include { PICARD_COLLECTWGSMETRICS } from '../../../../modules/picard/collectwgs
workflow test_picard_collectwgsmetrics {
input = [ [ id:'test', single_end:false ], // meta map
file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true),
file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam_bai'], checkIfExists: true)
]
fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)


@@ -7,4 +7,3 @@
- path: output/picard/test.dict
contains: ["SN:MT192765.1"]
- path: output/picard/versions.yml
md5sum: b3d8c7ea65b8a6d3237b153d13fe2014


@@ -7,4 +7,3 @@
- path: output/picard/test.bam
md5sum: 746102e8c242c0ef42e045c49d320030
- path: output/picard/versions.yml
md5sum: 4329ba7cdca8f4f6018dfd5c019ba2eb


@@ -1,5 +1,5 @@
process {
ext.args = "WARN_ON_MISSING_CONTIG=true"
ext.args = "--WARN_ON_MISSING_CONTIG true"
publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }
}


@@ -3,7 +3,7 @@ process {
publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }
withName: PICARD_MARKDUPLICATES_UNSORTED {
ext.args = 'ASSUME_SORT_ORDER=queryname'
ext.args = '--ASSUME_SORT_ORDER queryname'
}
}


@@ -0,0 +1,27 @@
#!/usr/bin/env nextflow
nextflow.enable.dsl = 2
include { BAM_QC_PICARD } from '../../../../subworkflows/nf-core/bam_qc_picard/main' addParams([:])
workflow test_bam_qc_picard_wgs {
input = [ [ id:'test', single_end:false ], // meta map
file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
]
fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
fasta_fai = file(params.test_data['sarscov2']['genome']['genome_fasta_fai'], checkIfExists: true)
BAM_QC_PICARD ( input, fasta, fasta_fai, [], [] )
}
workflow test_bam_qc_picard_targetted {
input = [ [ id:'test', single_end:false ], // meta map
file(params.test_data['sarscov2']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
]
fasta = file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true)
fasta_fai = file(params.test_data['sarscov2']['genome']['genome_fasta_fai'], checkIfExists: true)
bait = file(params.test_data['sarscov2']['genome']['baits_interval_list'], checkIfExists: true)
target = file(params.test_data['sarscov2']['genome']['targets_interval_list'], checkIfExists: true)
BAM_QC_PICARD ( input, fasta, fasta_fai, bait, target )
}


@@ -0,0 +1,5 @@
process {
publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }
}


@@ -0,0 +1,33 @@
- name: bam qc picard wgs
command: nextflow run ./tests/subworkflows/nf-core/bam_qc_picard -entry test_bam_qc_picard_wgs -c tests/config/nextflow.config
tags:
- subworkflows
# - subworkflows/bam_qc_picard
# Modules
# - picard
# - picard/collectmultiplemetrics
# - picard/collectwgsmetrics
files:
- path: ./output/picard/test.CollectMultipleMetrics.alignment_summary_metrics
- path: ./output/picard/test.CollectMultipleMetrics.insert_size_metrics
- path: ./output/picard/test.CollectMultipleMetrics.base_distribution_by_cycle_metrics
- path: ./output/picard/test.CollectMultipleMetrics.quality_by_cycle_metrics
- path: ./output/picard/test.CollectMultipleMetrics.quality_distribution_metrics
- path: ./output/picard/test.CollectWgsMetrics.coverage_metrics
- name: bam qc picard targetted
command: nextflow run ./tests/subworkflows/nf-core/bam_qc_picard -entry test_bam_qc_picard_targetted -c tests/config/nextflow.config
tags:
- subworkflows
# - subworkflows/bam_qc_picard
# Modules
# - picard
# - picard/collectmultiplemetrics
# - picard/collecthsmetrics
files:
- path: ./output/picard/test.CollectMultipleMetrics.alignment_summary_metrics
- path: ./output/picard/test.CollectMultipleMetrics.insert_size_metrics
- path: ./output/picard/test.CollectMultipleMetrics.base_distribution_by_cycle_metrics
- path: ./output/picard/test.CollectMultipleMetrics.quality_by_cycle_metrics
- path: ./output/picard/test.CollectMultipleMetrics.quality_distribution_metrics
- path: ./output/picard/test.CollectHsMetrics.coverage_metrics