Merge remote-tracking branch 'origin' into motus_profile

JIANHONG OU 2022-05-05 09:41:13 -04:00
parent 0f24e4dcff
commit c8f2c44e22
158 changed files with 2671 additions and 444 deletions

.github/ISSUE_TEMPLATE/bug_report.md (deleted)

@ -1,64 +0,0 @@
---
name: Bug report
about: Report something that is broken or incorrect
title: "[BUG]"
---
<!--
# nf-core/modules bug report
Hi there!
Thanks for telling us about a problem with the modules.
Please delete this text and anything that's not relevant from the template below:
-->
## Check Documentation
I have checked the following places for my error:
- [ ] [nf-core website: troubleshooting](https://nf-co.re/usage/troubleshooting)
- [ ] [nf-core/modules documentation](https://github.com/nf-core/modules/blob/master/README.md)
## Description of the bug
<!-- A clear and concise description of what the bug is. -->
## Steps to reproduce
Steps to reproduce the behaviour:
1. Command line: <!-- [e.g. `nextflow run ...`] -->
2. See error: <!-- [Please provide your error message] -->
## Expected behaviour
<!-- A clear and concise description of what you expected to happen. -->
## Log files
Have you provided the following extra information/files:
- [ ] The command used to run the module
- [ ] The `.nextflow.log` file <!-- this is a hidden file in the directory where you launched the module -->
## System
- Hardware: <!-- [e.g. HPC, Desktop, Cloud...] -->
- Executor: <!-- [e.g. slurm, local, awsbatch...] -->
- OS: <!-- [e.g. CentOS Linux, macOS, Linux Mint...] -->
- Version <!-- [e.g. 7, 10.13.6, 18.3...] -->
## Nextflow Installation
- Version: <!-- [e.g. 19.10.0] -->
## Container engine
- Engine: <!-- [e.g. Conda, Docker, Singularity or Podman] -->
- version: <!-- [e.g. 1.0.0] -->
- Image tag: <!-- [e.g. nfcore/module:2.6] -->
## Additional context
<!-- Add any other context about the problem here. -->

.github/ISSUE_TEMPLATE/bug_report.yml

@ -0,0 +1,52 @@
name: Bug report
description: Report something that is broken or incorrect
labels: bug
body:
- type: checkboxes
attributes:
label: Have you checked the docs?
description: I have checked the following places for my error
options:
- label: "[nf-core website: troubleshooting](https://nf-co.re/usage/troubleshooting)"
required: true
- label: "[nf-core modules documentation](https://nf-co.re/docs/contributing/modules)"
required: true
- type: textarea
id: description
attributes:
label: Description of the bug
description: A clear and concise description of what the bug is.
validations:
required: true
- type: textarea
id: command_used
attributes:
label: Command used and terminal output
description: Steps to reproduce the behaviour. Please paste the command you used to launch the pipeline and the output from your terminal.
render: console
placeholder: |
$ nextflow run ...
Some output where something broke
- type: textarea
id: files
attributes:
label: Relevant files
description: |
Please drag and drop the relevant files here. Create a `.zip` archive if the extension is not allowed.
Your verbose log file `.nextflow.log` is often useful _(this is a hidden file in the directory where you launched the pipeline)_ as well as custom Nextflow configuration files.
- type: textarea
id: system
attributes:
label: System information
description: |
* Nextflow version _(eg. 21.10.3)_
* Hardware _(eg. HPC, Desktop, Cloud)_
* Executor _(eg. slurm, local, awsbatch)_
* Container engine and version: _(e.g. Docker 1.0.0, Singularity, Conda, Podman, Shifter or Charliecloud)_
* OS and version: _(eg. CentOS Linux, macOS, Ubuntu 22.04)_
* Image tag _(eg. nfcore/cellranger:2.6)_

.github/ISSUE_TEMPLATE/feature_request.md (deleted)

@ -1,32 +0,0 @@
---
name: Feature request
about: Suggest an idea for nf-core/modules
title: "[FEATURE]"
---
<!--
# nf-core/modules feature request
Hi there!
Thanks for suggesting a new feature for the modules!
Please delete this text and anything that's not relevant from the template below:
-->
## Is your feature request related to a problem? Please describe
<!-- A clear and concise description of what the problem is. -->
<!-- e.g. [I'm always frustrated when ...] -->
## Describe the solution you'd like
<!-- A clear and concise description of what you want to happen. -->
## Describe alternatives you've considered
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
## Additional context
<!-- Add any other context about the feature request here. -->

.github/ISSUE_TEMPLATE/feature_request.yml

@ -0,0 +1,32 @@
name: Feature request
description: Suggest an idea for nf-core/modules
labels: feature
title: "[FEATURE]"
body:
- type: textarea
id: description
attributes:
label: Is your feature request related to a problem? Please describe
description: A clear and concise description of what the problem is.
placeholder: |
eg. I'm always frustrated when ...
validations:
required: true
- type: textarea
id: solution
attributes:
label: Describe the solution you'd like
description: A clear and concise description of what you want to happen.
- type: textarea
id: alternatives
attributes:
label: Describe alternatives you've considered
description: A clear and concise description of any alternative solutions or features you've considered.
- type: textarea
id: additional_context
attributes:
label: Additional context
description: Add any other context about the feature request here.

.github/ISSUE_TEMPLATE/new_module.md (deleted)

@ -1,26 +0,0 @@
---
name: New module
about: Suggest a new module for nf-core/modules
title: "new module: TOOL/SUBTOOL"
label: new module
---
<!--
# nf-core/modules new module suggestion
Hi there!
Thanks for suggesting a new module for nf-core/modules!
Please delete this text and anything that's not relevant from the template below:
Replace TOOL with the bioconda name for the tool in the following text, so that the link is functional.
Replace TOOL/SUBTOOL in the issue title so that it's understandable.
-->
I think it would be good to have a module for [TOOL](https://bioconda.github.io/recipes/TOOL/README.html)
- [ ] This module does not exist yet with the [`nf-core modules list`](https://github.com/nf-core/tools#list-modules) command
- [ ] There is no [open pull request](https://github.com/nf-core/modules/pulls) for this module
- [ ] There is no [open issue](https://github.com/nf-core/modules/issues) for this module
- [ ] If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module

.github/ISSUE_TEMPLATE/new_module.yml

@ -0,0 +1,36 @@
name: New module
description: Suggest a new module for nf-core/modules
title: "new module: TOOL/SUBTOOL"
labels: new module
body:
- type: checkboxes
attributes:
label: Is there an existing module for this?
description: This module does not exist yet with the [`nf-core modules list`](https://github.com/nf-core/tools#list-modules) command
options:
- label: I have searched for the existing module
required: true
- type: checkboxes
attributes:
label: Is there an open PR for this?
description: There is no [open pull request](https://github.com/nf-core/modules/pulls) for this module
options:
- label: I have searched for existing PRs
required: true
- type: checkboxes
attributes:
label: Is there an open issue for this?
description: There is no [open issue](https://github.com/nf-core/modules/issues) for this module
options:
- label: I have searched for existing issues
required: true
- type: checkboxes
attributes:
label: Are you going to work on this?
description: If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
options:
- label: If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
required: false

modules/antismash/antismashlitedownloaddatabases/main.nf

@ -27,9 +27,7 @@ process ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES {
output:
path("antismash_db") , emit: database
path("css"), emit: css_dir
path("detection"), emit: detection_dir
path("modules"), emit: modules_dir
path("antismash_dir"), emit: antismash_dir
path "versions.yml", emit: versions
when:
@ -37,11 +35,19 @@ process ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES {
script:
def args = task.ext.args ?: ''
conda = params.enable_conda
"""
download-antismash-databases \\
--database-dir antismash_db \\
$args
if [[ $conda = false ]]; \
then \
cp -r /usr/local/lib/python3.8/site-packages/antismash antismash_dir; \
else \
cp -r \$(python -c 'import antismash;print(antismash.__file__.split("/__")[0])') antismash_dir; \
fi
cat <<-END_VERSIONS > versions.yml
"${task.process}":
antismash-lite: \$(antismash --version | sed 's/antiSMASH //')

modules/antismash/antismashlitedownloaddatabases/meta.yml

@ -50,21 +50,11 @@ output:
type: directory
description: Download directory for antiSMASH databases
pattern: "antismash_db"
- css_dir:
- antismash_dir:
type: directory
description: |
antismash/outputs/html/css folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
pattern: "css"
- detection_dir:
type: directory
description: |
antismash/detection folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
pattern: "detection"
- modules_dir:
type: directory
description: |
antismash/modules folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
pattern: "modules"
antismash installation folder which is being modified during the antiSMASH database downloading step. The modified files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database and installation folder in pipelines.
pattern: "antismash_dir"
authors:
- "@jasmezz"

modules/bamtools/split/main.nf

@ -2,10 +2,10 @@ process BAMTOOLS_SPLIT {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::bamtools=2.5.1" : null)
conda (params.enable_conda ? "bioconda::bamtools=2.5.2" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/bamtools:2.5.1--h9a82719_9' :
'quay.io/biocontainers/bamtools:2.5.1--h9a82719_9' }"
'https://depot.galaxyproject.org/singularity/bamtools:2.5.2--hd03093a_0' :
'quay.io/biocontainers/bamtools:2.5.2--hd03093a_0' }"
input:
tuple val(meta), path(bam)
@ -20,11 +20,15 @@ process BAMTOOLS_SPLIT {
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
def input_list = bam.collect{"-in $it"}.join(' ')
"""
bamtools \\
split \\
-in $bam \\
$args
merge \\
$input_list \\
| bamtools \\
split \\
-stub $prefix \\
$args
cat <<-END_VERSIONS > versions.yml
"${task.process}":

modules/bamtools/split/meta.yml

@ -23,7 +23,7 @@ input:
e.g. [ id:'test', single_end:false ]
- bam:
type: file
description: A BAM file to split
description: A list of one or more BAM files to merge and then split
pattern: "*.bam"
output:
@ -43,3 +43,4 @@ output:
authors:
- "@sguizard"
- "@matthdsm"

modules/bowtie2/align/main.nf

@ -29,6 +29,8 @@ process BOWTIE2_ALIGN {
def unaligned = save_unaligned ? "--un-gz ${prefix}.unmapped.fastq.gz" : ''
"""
INDEX=`find -L ./ -name "*.rev.1.bt2" | sed 's/.rev.1.bt2//'`
[ -z "\$INDEX" ] && INDEX=`find -L ./ -name "*.rev.1.bt2l" | sed 's/.rev.1.bt2l//'`
[ -z "\$INDEX" ] && echo "BT2 index files not found" 1>&2 && exit 1
bowtie2 \\
-x \$INDEX \\
-U $reads \\
@ -49,6 +51,8 @@ process BOWTIE2_ALIGN {
def unaligned = save_unaligned ? "--un-conc-gz ${prefix}.unmapped.fastq.gz" : ''
"""
INDEX=`find -L ./ -name "*.rev.1.bt2" | sed 's/.rev.1.bt2//'`
[ -z "\$INDEX" ] && INDEX=`find -L ./ -name "*.rev.1.bt2l" | sed 's/.rev.1.bt2l//'`
[ -z "\$INDEX" ] && echo "BT2 index files not found" 1>&2 && exit 1
bowtie2 \\
-x \$INDEX \\
-1 ${reads[0]} \\

modules/diamond/blastp/main.nf

@ -2,20 +2,26 @@ process DIAMOND_BLASTP {
tag "$meta.id"
label 'process_medium'
// Diamond was limited to v2.0.9 because no Singularity image
// for a newer version was available at the time.
conda (params.enable_conda ? "bioconda::diamond=2.0.9" : null)
conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"
input:
tuple val(meta), path(fasta)
path db
path db
val out_ext
val blast_columns
output:
tuple val(meta), path('*.txt'), emit: txt
path "versions.yml" , emit: versions
tuple val(meta), path('*.blast'), optional: true, emit: blast
tuple val(meta), path('*.xml') , optional: true, emit: xml
tuple val(meta), path('*.txt') , optional: true, emit: txt
tuple val(meta), path('*.daa') , optional: true, emit: daa
tuple val(meta), path('*.sam') , optional: true, emit: sam
tuple val(meta), path('*.tsv') , optional: true, emit: tsv
tuple val(meta), path('*.paf') , optional: true, emit: paf
path "versions.yml" , emit: versions
when:
task.ext.when == null || task.ext.when
@ -23,6 +29,21 @@ process DIAMOND_BLASTP {
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
def columns = blast_columns ? "${blast_columns}" : ''
switch ( out_ext ) {
case "blast": outfmt = 0; break
case "xml": outfmt = 5; break
case "txt": outfmt = 6; break
case "daa": outfmt = 100; break
case "sam": outfmt = 101; break
case "tsv": outfmt = 102; break
case "paf": outfmt = 103; break
default:
outfmt = '6';
out_ext = 'txt';
log.warn("Unknown output file format provided (${out_ext}): selecting DIAMOND default of tabular BLAST output (txt)");
break
}
"""
DB=`find -L ./ -name "*.dmnd" | sed 's/.dmnd//'`
@ -31,8 +52,9 @@ process DIAMOND_BLASTP {
--threads $task.cpus \\
--db \$DB \\
--query $fasta \\
--outfmt ${outfmt} ${columns} \\
$args \\
--out ${prefix}.txt
--out ${prefix}.${out_ext}
cat <<-END_VERSIONS > versions.yml
"${task.process}":

modules/diamond/blastp/meta.yml

@ -28,12 +28,50 @@ input:
type: directory
description: Directory containing the protein blast database
pattern: "*"
- out_ext:
type: string
description: |
Specify the type of output file to be generated. `blast` corresponds to
BLAST pairwise format. `xml` corresponds to BLAST XML format.
`txt` corresponds to BLAST tabular format. `tsv` corresponds to
taxonomic classification format.
pattern: "blast|xml|txt|daa|sam|tsv|paf"
- blast_columns:
type: string
description: |
Optional space-separated list of DIAMOND tabular BLAST output keywords,
used in conjunction with the 'txt' out_ext option (--outfmt 6). See the
DIAMOND documentation for more information.
output:
- txt:
- blast:
type: file
description: File containing blastp hits
pattern: "*.{blastp.txt}"
pattern: "*.{blast}"
- xml:
type: file
description: File containing blastp hits
pattern: "*.{xml}"
- txt:
type: file
description: File containing hits in tabular BLAST format.
pattern: "*.{txt}"
- daa:
type: file
description: File containing hits in DAA format
pattern: "*.{daa}"
- sam:
type: file
description: File containing aligned reads in SAM format
pattern: "*.{sam}"
- tsv:
type: file
description: Tab separated file containing taxonomic classification of hits
pattern: "*.{tsv}"
- paf:
type: file
description: File containing aligned reads in pairwise mapping format (PAF)
pattern: "*.{paf}"
- versions:
type: file
description: File containing software versions
@ -41,3 +79,4 @@ output:
authors:
- "@spficklin"
- "@jfy133"

modules/diamond/blastx/main.nf

@ -2,20 +2,26 @@ process DIAMOND_BLASTX {
tag "$meta.id"
label 'process_medium'
// Diamond was limited to v2.0.9 because no Singularity image
// for a newer version was available at the time.
conda (params.enable_conda ? "bioconda::diamond=2.0.9" : null)
conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"
input:
tuple val(meta), path(fasta)
path db
path db
val out_ext
val blast_columns
output:
tuple val(meta), path('*.txt'), emit: txt
path "versions.yml" , emit: versions
tuple val(meta), path('*.blast'), optional: true, emit: blast
tuple val(meta), path('*.xml') , optional: true, emit: xml
tuple val(meta), path('*.txt') , optional: true, emit: txt
tuple val(meta), path('*.daa') , optional: true, emit: daa
tuple val(meta), path('*.sam') , optional: true, emit: sam
tuple val(meta), path('*.tsv') , optional: true, emit: tsv
tuple val(meta), path('*.paf') , optional: true, emit: paf
path "versions.yml" , emit: versions
when:
task.ext.when == null || task.ext.when
@ -23,6 +29,21 @@ process DIAMOND_BLASTX {
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
def columns = blast_columns ? "${blast_columns}" : ''
switch ( out_ext ) {
case "blast": outfmt = 0; break
case "xml": outfmt = 5; break
case "txt": outfmt = 6; break
case "daa": outfmt = 100; break
case "sam": outfmt = 101; break
case "tsv": outfmt = 102; break
case "paf": outfmt = 103; break
default:
outfmt = '6';
out_ext = 'txt';
log.warn("Unknown output file format provided (${out_ext}): selecting DIAMOND default of tabular BLAST output (txt)");
break
}
"""
DB=`find -L ./ -name "*.dmnd" | sed 's/.dmnd//'`
@ -31,8 +52,9 @@ process DIAMOND_BLASTX {
--threads $task.cpus \\
--db \$DB \\
--query $fasta \\
--outfmt ${outfmt} ${columns} \\
$args \\
--out ${prefix}.txt
--out ${prefix}.${out_ext}
cat <<-END_VERSIONS > versions.yml
"${task.process}":

modules/diamond/blastx/meta.yml

@ -28,12 +28,44 @@ input:
type: directory
description: Directory containing the nucleotide blast database
pattern: "*"
- out_ext:
type: string
description: |
Specify the type of output file to be generated. `blast` corresponds to
BLAST pairwise format. `xml` corresponds to BLAST XML format.
`txt` corresponds to BLAST tabular format. `tsv` corresponds to
taxonomic classification format.
pattern: "blast|xml|txt|daa|sam|tsv|paf"
output:
- blast:
type: file
description: File containing blastx hits
pattern: "*.{blast}"
- xml:
type: file
description: File containing blastx hits
pattern: "*.{xml}"
- txt:
type: file
description: File containing blastx hits
pattern: "*.{blastx.txt}"
description: File containing hits in tabular BLAST format.
pattern: "*.{txt}"
- daa:
type: file
description: File containing hits in DAA format
pattern: "*.{daa}"
- sam:
type: file
description: File containing aligned reads in SAM format
pattern: "*.{sam}"
- tsv:
type: file
description: Tab separated file containing taxonomic classification of hits
pattern: "*.{tsv}"
- paf:
type: file
description: File containing aligned reads in pairwise mapping format (PAF)
pattern: "*.{paf}"
- versions:
type: file
description: File containing software versions
@ -41,3 +73,4 @@ output:
authors:
- "@spficklin"
- "@jfy133"

modules/diamond/makedb/main.nf

@ -2,12 +2,10 @@ process DIAMOND_MAKEDB {
tag "$fasta"
label 'process_medium'
// Diamond was limited to v2.0.9 because no Singularity image
// for a newer version was available at the time.
conda (params.enable_conda ? 'bioconda::diamond=2.0.9' : null)
conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"
input:
path fasta

modules/elprep/merge/main.nf

@ -0,0 +1,43 @@
process ELPREP_MERGE {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::elprep=5.1.2" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/elprep:5.1.2--he881be0_0':
'quay.io/biocontainers/elprep:5.1.2--he881be0_0' }"
input:
tuple val(meta), path(bam)
output:
tuple val(meta), path("output/**.{bam,sam}") , emit: bam
path "versions.yml" , emit: versions
when:
task.ext.when == null || task.ext.when
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
def suffix = args.contains("--output-type sam") ? "sam" : "bam"
def single_end = meta.single_end ? " --single-end" : ""
"""
# create a directory and move all inputs there so elprep can find and merge them
mkdir input
mv ${bam} input/
elprep merge \\
input/ \\
output/${prefix}.${suffix} \\
$args \\
${single_end} \\
--nr-of-threads $task.cpus
cat <<-END_VERSIONS > versions.yml
"${task.process}":
elprep: \$(elprep 2>&1 | head -n2 | tail -n1 |sed 's/^.*version //;s/ compiled.*\$//')
END_VERSIONS
"""
}
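A usage sketch for the new module, assuming the chunks were produced upstream (eg. by elprep split); file names are illustrative:

include { ELPREP_MERGE } from './modules/elprep/merge/main'

workflow {
    ch_chunks = Channel.of( [ [ id:'test', single_end:false ], [ file('chunk1.bam'), file('chunk2.bam') ] ] )
    ELPREP_MERGE ( ch_chunks )
}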

modules/elprep/merge/meta.yml

@ -0,0 +1,44 @@
name: "elprep_merge"
description: Merge split BAM/SAM chunks into one file
keywords:
- bam
- sam
- merge
tools:
- "elprep":
description: "elPrep is a high-performance tool for preparing .sam/.bam files for variant calling in sequencing pipelines. It can be used as a drop-in replacement for SAMtools/Picard/GATK4."
homepage: "https://github.com/ExaScience/elprep"
documentation: "https://github.com/ExaScience/elprep"
tool_dev_url: "https://github.com/ExaScience/elprep"
doi: "10.1371/journal.pone.0244471"
licence: "['AGPL v3']"
input:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- bam:
type: file
description: List of BAM/SAM chunks to merge
pattern: "*.{bam,sam}"
output:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
- bam:
type: file
description: Merged BAM/SAM file
pattern: "*.{bam,sam}"
authors:
- "@matthdsm"

modules/ensemblvep/Dockerfile

@ -8,13 +8,14 @@ LABEL \
COPY environment.yml /
RUN conda env create -f /environment.yml && conda clean -a
# Add conda installation dir to PATH (instead of doing 'conda activate')
ENV PATH /opt/conda/envs/nf-core-vep-104.3/bin:$PATH
# Setup default ARG variables
ARG GENOME=GRCh38
ARG SPECIES=homo_sapiens
ARG VEP_VERSION=99
ARG VEP_VERSION=104
ARG VEP_TAG=104.3
# Add conda installation dir to PATH (instead of doing 'conda activate')
ENV PATH /opt/conda/envs/nf-core-vep-${VEP_TAG}/bin:$PATH
# Download Genome
RUN vep_install \
@ -27,4 +28,4 @@ RUN vep_install \
--NO_BIOPERL --NO_HTSLIB --NO_TEST --NO_UPDATE
# Dump the details of the installed packages to a file for posterity
RUN conda env export --name nf-core-vep-104.3 > nf-core-vep-104.3.yml
RUN conda env export --name nf-core-vep-${VEP_TAG} > nf-core-vep-${VEP_TAG}.yml

modules/ensemblvep/build.sh

@ -10,11 +10,12 @@ build_push() {
VEP_TAG=$4
docker build \
. \
-t nfcore/vep:${VEP_TAG}.${GENOME} \
software/vep/. \
--build-arg GENOME=${GENOME} \
--build-arg SPECIES=${SPECIES} \
--build-arg VEP_VERSION=${VEP_VERSION}
--build-arg VEP_VERSION=${VEP_VERSION} \
--build-arg VEP_TAG=${VEP_TAG}
docker push nfcore/vep:${VEP_TAG}.${GENOME}
}

modules/ensemblvep/main.nf

@ -13,6 +13,7 @@ process ENSEMBLVEP {
val species
val cache_version
path cache
path extra_files
output:
tuple val(meta), path("*.ann.vcf"), emit: vcf
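With the added extra_files input, callers that use VEP plugins can now stage the files those plugins need; everyone else passes an empty list. A hedged call sketch, assuming the module's remaining inputs are unchanged (vcf tuple, genome, species, cache version, cache):

ENSEMBLVEP ( ch_vcf, 'GRCh38', 'homo_sapiens', '104', [], [] )   // final [] = no plugin files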

modules/ensemblvep/meta.yml

@ -10,17 +10,6 @@ tools:
homepage: https://www.ensembl.org/info/docs/tools/vep/index.html
documentation: https://www.ensembl.org/info/docs/tools/vep/script/index.html
licence: ["Apache-2.0"]
params:
- use_cache:
type: boolean
description: |
Enable the usage of containers with cache
Does not work with conda
- vep_tag:
type: value
description: |
Specify the tag for the container
https://hub.docker.com/r/nfcore/vep/tags
input:
- meta:
type: map
@ -47,6 +36,10 @@ input:
type: file
description: |
path to VEP cache (optional)
- extra_files:
type: tuple
description: |
path to file(s) needed for plugins (optional)
output:
- vcf:
type: file

modules/gatk4/markduplicates/main.nf

@ -12,7 +12,7 @@ process GATK4_MARKDUPLICATES {
output:
tuple val(meta), path("*.bam") , emit: bam
tuple val(meta), path("*.bai") , emit: bai
tuple val(meta), path("*.bai") , optional:true, emit: bai
tuple val(meta), path("*.metrics"), emit: metrics
path "versions.yml" , emit: versions

modules/gatk4/mergebamalignment/main.nf

@ -43,4 +43,15 @@ process GATK4_MERGEBAMALIGNMENT {
gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
stub:
def prefix = task.ext.prefix ?: "${meta.id}"
"""
touch ${prefix}.bam
cat <<-END_VERSIONS > versions.yml
"${task.process}":
gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
}
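This and the following gatk4 modules (and samtools/view further down) gain stub blocks: when Nextflow is launched with the -stub-run flag it executes the stub section instead of the real command, so pipeline wiring can be tested with placeholder outputs and without the GATK runtime, eg.:

nextflow run main.nf -stub-run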

modules/gatk4/mutect2/main.nf

@ -57,4 +57,18 @@ process GATK4_MUTECT2 {
gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
stub:
def prefix = task.ext.prefix ?: "${meta.id}"
"""
touch ${prefix}.vcf.gz
touch ${prefix}.vcf.gz.tbi
touch ${prefix}.vcf.gz.stats
touch ${prefix}.f1r2.tar.gz
cat <<-END_VERSIONS > versions.yml
"${task.process}":
gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
}

modules/gatk4/revertsam/main.nf

@ -39,4 +39,15 @@ process GATK4_REVERTSAM {
gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
stub:
def prefix = task.ext.prefix ?: "${meta.id}"
"""
touch ${prefix}.reverted.bam
cat <<-END_VERSIONS > versions.yml
"${task.process}":
gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
}

modules/gatk4/samtofastq/main.nf

@ -40,4 +40,17 @@ process GATK4_SAMTOFASTQ {
gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
stub:
def prefix = task.ext.prefix ?: "${meta.id}"
"""
touch ${prefix}.fastq.gz
touch ${prefix}_1.fastq.gz
touch ${prefix}_2.fastq.gz
cat <<-END_VERSIONS > versions.yml
"${task.process}":
gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
}

modules/happy/happy/main.nf

@ -0,0 +1,42 @@
def VERSION = '0.3.14'
process HAPPY_HAPPY {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::hap.py=0.3.14" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/hap.py:0.3.14--py27h5c5a3ab_0':
'quay.io/biocontainers/hap.py:0.3.14--py27h5c5a3ab_0' }"
input:
tuple val(meta), path(truth_vcf), path(query_vcf), path(bed)
tuple path(fasta), path(fasta_fai)
output:
tuple val(meta), path('*.csv'), path('*.json') , emit: metrics
path "versions.yml" , emit: versions
when:
task.ext.when == null || task.ext.when
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
"""
hap.py \\
$truth_vcf \\
$query_vcf \\
$args \\
--reference $fasta \\
--threads $task.cpus \\
-R $bed \\
-o $prefix
cat <<-END_VERSIONS > versions.yml
"${task.process}":
hap.py: $VERSION
END_VERSIONS
"""
}
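A usage sketch (file names illustrative; the reference is passed as a two-element fasta/fai tuple):

include { HAPPY_HAPPY } from './modules/happy/happy/main'

workflow {
    ch_input = Channel.of( [ [ id:'test' ], file('truth.vcf.gz'), file('query.vcf.gz'), file('regions.bed') ] )
    HAPPY_HAPPY ( ch_input, [ file('genome.fa'), file('genome.fa.fai') ] )
}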

modules/happy/happy/meta.yml

@ -0,0 +1,67 @@
name: "happy_happy"
description: Hap.py is a tool to compare diploid genotypes at haplotype level. Rather than comparing VCF records row by row, hap.py will generate and match alternate sequences in a superlocus. A superlocus is a small region of the genome (sized between 1 and around 1000 bp) that contains one or more variants.
keywords:
- happy
- benchmark
- haplotype
tools:
- "happy":
description: "Haplotype VCF comparison tools"
homepage: "https://www.illumina.com/products/by-type/informatics-products/basespace-sequence-hub/apps/hap-py-benchmarking.html"
documentation: "https://github.com/Illumina/hap.py"
tool_dev_url: "https://github.com/Illumina/hap.py"
doi: ""
licence: "['BSD-2-clause']"
input:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- truth_vcf:
type: file
description: gold standard VCF file
pattern: "*.{vcf,vcf.gz}"
- query_vcf:
type: file
description: VCF/GVCF file to query
pattern: "*.{vcf,vcf.gz}"
- bed:
type: file
description: BED file
pattern: "*.bed"
- fasta:
type: file
description: FASTA file of the reference genome
pattern: "*.{fa,fasta}"
- fasta_fai:
type: file
description: The index of the reference FASTA
pattern: "*.fai"
output:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- summary:
type: file
description: A CSV file containing the summary of the benchmarking
pattern: "*.summary.csv"
- extended:
type: file
description: A CSV file containing extended info of the benchmarking
pattern: "*.extended.csv"
- runinfo:
type: file
description: A JSON file containing the run info
pattern: "*.runinfo.json"
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
authors:
- "@nvnieuwk"

modules/happy/prepy/main.nf

@ -0,0 +1,41 @@
def VERSION = '0.3.14'
process HAPPY_PREPY {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::hap.py=0.3.14" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/hap.py:0.3.14--py27h5c5a3ab_0':
'quay.io/biocontainers/hap.py:0.3.14--py27h5c5a3ab_0' }"
input:
tuple val(meta), path(vcf), path(bed)
tuple path(fasta), path(fasta_fai)
output:
tuple val(meta), path('*.vcf.gz') , emit: preprocessed_vcf
path "versions.yml" , emit: versions
when:
task.ext.when == null || task.ext.when
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
"""
pre.py \\
$args \\
-R $bed \\
--reference $fasta \\
--threads $task.cpus \\
$vcf \\
${prefix}.vcf.gz
cat <<-END_VERSIONS > versions.yml
"${task.process}":
pre.py: $VERSION
END_VERSIONS
"""
}

modules/happy/prepy/meta.yml

@ -0,0 +1,55 @@
name: "happy_prepy"
description: Pre.py is a preprocessing tool made to preprocess VCF files for Hap.py
keywords:
- happy
- benchmark
- haplotype
tools:
- "happy":
description: "Haplotype VCF comparison tools"
homepage: "https://www.illumina.com/products/by-type/informatics-products/basespace-sequence-hub/apps/hap-py-benchmarking.html"
documentation: "https://github.com/Illumina/hap.py"
tool_dev_url: "https://github.com/Illumina/hap.py"
doi: ""
licence: "['BSD-2-clause']"
input:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- vcf:
type: file
description: VCF file to preprocess
pattern: "*.{vcf,vcf.gz}"
- bed:
type: file
description: BED file
pattern: "*.bed"
- fasta:
type: file
description: FASTA file of the reference genome
pattern: "*.{fa,fasta}"
- fasta_fai:
type: file
description: The index of the reference FASTA
pattern: "*.fai"
output:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- vcf:
type: file
description: A preprocessed VCF file
pattern: "*.vcf.gz"
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
authors:
- "@nvnieuwk"

modules/krona/kronadb/main.nf

@ -18,7 +18,9 @@ process KRONA_KRONADB {
script:
def args = task.ext.args ?: ''
"""
ktUpdateTaxonomy.sh taxonomy
ktUpdateTaxonomy.sh \\
$args \\
taxonomy/
cat <<-END_VERSIONS > versions.yml
"${task.process}":

modules/krona/ktimporttaxonomy/main.nf

@ -23,7 +23,10 @@ process KRONA_KTIMPORTTAXONOMY {
script:
def args = task.ext.args ?: ''
"""
ktImportTaxonomy "$report" -tax taxonomy
ktImportTaxonomy \\
$args \\
-tax taxonomy/ \\
"$report"
cat <<-END_VERSIONS > versions.yml
"${task.process}":

modules/krona/ktimporttaxonomy/meta.yml

@ -23,8 +23,11 @@ input:
Groovy Map containing sample information
e.g. [ id:'test']
- database:
type: path
description: "Path to the taxonomy database downloaded by krona/kronadb"
type: file
description: |
Path to the taxonomy database .tab file downloaded by krona/ktUpdateTaxonomy
The file will be saved under a folder named "taxonomy" as "taxonomy/taxonomy.tab".
The parent folder will be passed as an argument to ktImportTaxonomy.
- report:
type: file
description: "A tab-delimited file with taxonomy IDs and (optionally) query IDs, magnitudes, and scores. Query IDs are taken from column 1, taxonomy IDs from column 2, and scores from column 3. Lines beginning with # will be ignored."

modules/krona/ktimporttext/main.nf

@ -0,0 +1,34 @@
process KRONA_KTIMPORTTEXT {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::krona=2.8.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/krona:2.8.1--pl5321hdfd78af_1':
'quay.io/biocontainers/krona:2.8.1--pl5321hdfd78af_1' }"
input:
tuple val(meta), path(report)
output:
tuple val(meta), path ('*.html'), emit: html
path "versions.yml" , emit: versions
when:
task.ext.when == null || task.ext.when
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
"""
ktImportText \\
$args \\
-o ${prefix}.html \\
$report
cat <<-END_VERSIONS > versions.yml
"${task.process}":
krona: \$( echo \$(ktImportText 2>&1) | sed 's/^.*KronaTools //g; s/- ktImportText.*\$//g')
END_VERSIONS
"""
}
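A usage sketch; the input is a tab-delimited text file of quantities and lineages as described in the meta.yml below (file name illustrative):

include { KRONA_KTIMPORTTEXT } from './modules/krona/ktimporttext/main'

workflow {
    ch_text = Channel.of( [ [ id:'test' ], file('counts.txt') ] )
    KRONA_KTIMPORTTEXT ( ch_text )
}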

modules/krona/ktimporttext/meta.yml

@ -0,0 +1,47 @@
name: "krona_ktimporttext"
description: Creates a Krona chart from text files listing quantities and lineages.
keywords:
- plot
- taxonomy
- interactive
- html
- visualisation
- krona chart
- metagenomics
tools:
- krona:
description: Krona Tools is a set of scripts to create Krona charts from several Bioinformatics tools as well as from text and XML files.
homepage: https://github.com/marbl/Krona/wiki/KronaTools
documentation: http://manpages.ubuntu.com/manpages/impish/man1/ktImportTaxonomy.1.html
tool_dev_url: https://github.com/marbl/Krona
doi: 10.1186/1471-2105-12-385
licence: https://raw.githubusercontent.com/marbl/Krona/master/KronaTools/LICENSE.txt
input:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test']
- report:
type: file
description: "Tab-delimited text file. Each line should be a number followed by a list of wedges to contribute to (starting from the highest level). If no wedges are listed (and just a quantity is given), it will contribute to the top level. If the same lineage is listed more than once, the values will be added. Quantities can be omitted if -q is specified. Lines beginning with '#' will be ignored."
pattern: "*.{txt}"
output:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test' ]
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
- html:
type: file
description: A html file containing an interactive krona plot.
pattern: "*.{html}"
authors:
- "@jianhong"

modules/krona/ktupdatetaxonomy/main.nf

@ -0,0 +1,30 @@
def VERSION='2.7.1' // Version information not provided by tool on CLI
process KRONA_KTUPDATETAXONOMY {
label 'process_low'
conda (params.enable_conda ? "bioconda::krona=2.7.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/krona:2.7.1--pl526_5' :
'quay.io/biocontainers/krona:2.7.1--pl526_5' }"
output:
path 'taxonomy/taxonomy.tab', emit: db
path "versions.yml" , emit: versions
when:
task.ext.when == null || task.ext.when
script:
def args = task.ext.args ?: ''
"""
ktUpdateTaxonomy.sh \\
$args \\
taxonomy/
cat <<-END_VERSIONS > versions.yml
"${task.process}":
krona: $VERSION
END_VERSIONS
"""
}

modules/krona/ktupdatetaxonomy/meta.yml

@ -0,0 +1,31 @@
name: krona_ktupdatetaxonomy
description: KronaTools Update Taxonomy downloads a taxonomy database
keywords:
- database
- taxonomy
- krona
- visualisation
tools:
- krona:
description: Krona Tools is a set of scripts to create Krona charts from several Bioinformatics tools as well as from text and XML files.
homepage: https://github.com/marbl/Krona/wiki/KronaTools
documentation: https://github.com/marbl/Krona/wiki/Installing
tool_dev_url:
doi: https://doi.org/10.1186/1471-2105-12-385
licence:
input:
- none: There is no input. This module downloads a pre-built taxonomy database for use with Krona Tools.
output:
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
- db:
type: file
description: A TAB separated file that contains a taxonomy database.
pattern: "*.{tab}"
authors:
- "@mjakobs"

modules/metaphlan3/main.nf

@ -23,7 +23,7 @@ process METAPHLAN3 {
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
def input_type = ("$input".endsWith(".fastq.gz")) ? "--input_type fastq" : ("$input".contains(".fasta")) ? "--input_type fasta" : ("$input".endsWith(".bowtie2out.txt")) ? "--input_type bowtie2out" : "--input_type sam"
def input_type = ("$input".endsWith(".fastq.gz") || "$input".endsWith(".fq.gz")) ? "--input_type fastq" : ("$input".contains(".fasta")) ? "--input_type fasta" : ("$input".endsWith(".bowtie2out.txt")) ? "--input_type bowtie2out" : "--input_type sam"
def input_data = ("$input_type".contains("fastq")) && !meta.single_end ? "${input[0]},${input[1]}" : "$input"
def bowtie2_out = "$input_type" == "--input_type bowtie2out" || "$input_type" == "--input_type sam" ? '' : "--bowtie2out ${prefix}.bowtie2out.txt"

modules/minimap2/align/main.nf

@ -27,8 +27,8 @@ process MINIMAP2_ALIGN {
def prefix = task.ext.prefix ?: "${meta.id}"
def input_reads = meta.single_end ? "$reads" : "${reads[0]} ${reads[1]}"
def bam_output = bam_format ? "-a | samtools sort | samtools view -@ ${task.cpus} -b -h -o ${prefix}.bam" : "-o ${prefix}.paf"
def cigar_paf = cigar_paf_format && !sam_format ? "-c" : ''
def set_cigar_bam = cigar_bam && sam_format ? "-L" : ''
def cigar_paf = cigar_paf_format && !bam_format ? "-c" : ''
def set_cigar_bam = cigar_bam && bam_format ? "-L" : ''
"""
minimap2 \\
$args \\

modules/motus/downloaddb/main.nf

@ -0,0 +1,39 @@
process MOTUS_DOWNLOADDB {
label 'process_low'
conda (params.enable_conda ? "bioconda::motus=3.0.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/motus:3.0.1--pyhdfd78af_0':
'quay.io/biocontainers/motus:3.0.1--pyhdfd78af_0' }"
input:
path motus_downloaddb_script
output:
path "db_mOTU/" , emit: db
path "versions.yml" , emit: versions
when:
task.ext.when == null || task.ext.when
script:
def args = task.ext.args ?: ''
def software = "${motus_downloaddb_script.simpleName}_copy.py"
"""
## The script file must be copied to the working directory,
## otherwise the reference DB is downloaded to the bin folder
## instead of the current directory
cp $motus_downloaddb_script ${software}
python ${software} \\
$args \\
-t $task.cpus
## The mOTUs version number is not available from the command line.
## mOTUs saves the version number in the index database folder
## and checks that the database version matches the executable version.
cat <<-END_VERSIONS > versions.yml
"${task.process}":
mOTUs: \$(grep motus db_mOTU/db_mOTU_versions | sed 's/motus\\t//g')
END_VERSIONS
"""
}
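Because the module takes the downloadDB.py script itself as its input, a caller can stage it straight from the mOTUs repository (the URL below is the one given in the meta.yml):

include { MOTUS_DOWNLOADDB } from './modules/motus/downloaddb/main'

workflow {
    MOTUS_DOWNLOADDB ( file('https://raw.githubusercontent.com/motu-tool/mOTUs/master/motus/downloadDB.py') )
}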

modules/motus/downloaddb/meta.yml

@ -0,0 +1,39 @@
name: "motus_downloaddb"
description: Download the mOTUs database
keywords:
- classify
- metagenomics
- fastq
- taxonomic profiling
- database
- download
tools:
- "motus":
description: "The mOTU profiler is a computational tool that estimates relative taxonomic abundance of known and currently unknown microbial community members using metagenomic shotgun sequencing data."
homepage: "None"
documentation: "https://github.com/motu-tool/mOTUs/wiki"
tool_dev_url: "https://github.com/motu-tool/mOTUs"
doi: "10.1038/s41467-019-08844-4"
licence: "['GPL v3']"
input:
- motus_downloaddb:
type: directory
description: |
The mOTUs downloadDB script source file:
either the locally installed script or its remote source on GitHub, e.g.
https://raw.githubusercontent.com/motu-tool/mOTUs/master/motus/downloadDB.py
pattern: "downloadDB.py"
output:
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
- db:
type: directory
description: The mOTUs database directory
pattern: "db_mOTU"
authors:
- "@jianhong"

modules/picard/addorreplacereadgroups/main.nf

@ -2,10 +2,10 @@ process PICARD_ADDORREPLACEREADGROUPS {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam)
@ -38,12 +38,12 @@ process PICARD_ADDORREPLACEREADGROUPS {
-Xmx${avail_mem}g \\
--INPUT ${bam} \\
--OUTPUT ${prefix}.bam \\
-ID ${ID} \\
-LB ${LIBRARY} \\
-PL ${PLATFORM} \\
-PU ${BARCODE} \\
-SM ${SAMPLE} \\
-CREATE_INDEX true
--RGID ${ID} \\
--RGLB ${LIBRARY} \\
--RGPL ${PLATFORM} \\
--RGPU ${BARCODE} \\
--RGSM ${SAMPLE} \\
--CREATE_INDEX true
cat <<-END_VERSIONS > versions.yml
"${task.process}":

modules/picard/cleansam/main.nf

@ -2,10 +2,10 @@ process PICARD_CLEANSAM {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam)
@ -31,8 +31,8 @@ process PICARD_CLEANSAM {
-Xmx${avail_mem}g \\
CleanSam \\
${args} \\
-I ${bam} \\
-O ${prefix}.bam
--INPUT ${bam} \\
--OUTPUT ${prefix}.bam
cat <<-END_VERSIONS > versions.yml
"${task.process}":

modules/picard/collecthsmetrics/main.nf

@ -2,10 +2,10 @@ process PICARD_COLLECTHSMETRICS {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam)
@ -38,10 +38,10 @@ process PICARD_COLLECTHSMETRICS {
CollectHsMetrics \\
$args \\
$reference \\
-BAIT_INTERVALS $bait_intervals \\
-TARGET_INTERVALS $target_intervals \\
-INPUT $bam \\
-OUTPUT ${prefix}.CollectHsMetrics.coverage_metrics
--BAIT_INTERVALS $bait_intervals \\
--TARGET_INTERVALS $target_intervals \\
--INPUT $bam \\
--OUTPUT ${prefix}.CollectHsMetrics.coverage_metrics
cat <<-END_VERSIONS > versions.yml

modules/picard/collectmultiplemetrics/main.nf

@ -2,10 +2,10 @@ process PICARD_COLLECTMULTIPLEMETRICS {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam)
@ -33,9 +33,9 @@ process PICARD_COLLECTMULTIPLEMETRICS {
-Xmx${avail_mem}g \\
CollectMultipleMetrics \\
$args \\
INPUT=$bam \\
OUTPUT=${prefix}.CollectMultipleMetrics \\
REFERENCE_SEQUENCE=$fasta
--INPUT $bam \\
--OUTPUT ${prefix}.CollectMultipleMetrics \\
--REFERENCE_SEQUENCE $fasta
cat <<-END_VERSIONS > versions.yml
"${task.process}":

modules/picard/collectwgsmetrics/main.nf

@ -2,13 +2,13 @@ process PICARD_COLLECTWGSMETRICS {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam), path(bai)
tuple val(meta), path(bam)
path fasta
output:
@ -32,9 +32,10 @@ process PICARD_COLLECTWGSMETRICS {
-Xmx${avail_mem}g \\
CollectWgsMetrics \\
$args \\
INPUT=$bam \\
OUTPUT=${prefix}.CollectWgsMetrics.coverage_metrics \\
REFERENCE_SEQUENCE=$fasta
--INPUT $bam \\
--OUTPUT ${prefix}.CollectWgsMetrics.coverage_metrics \\
--REFERENCE_SEQUENCE $fasta
cat <<-END_VERSIONS > versions.yml
"${task.process}":

modules/picard/createsequencedictionary/main.nf

@ -2,10 +2,10 @@ process PICARD_CREATESEQUENCEDICTIONARY {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(fasta)
@ -31,8 +31,8 @@ process PICARD_CREATESEQUENCEDICTIONARY {
-Xmx${avail_mem}g \\
CreateSequenceDictionary \\
$args \\
R=$fasta \\
O=${prefix}.dict
--REFERENCE $fasta \\
--OUTPUT ${prefix}.dict
cat <<-END_VERSIONS > versions.yml
"${task.process}":

modules/picard/crosscheckfingerprints/main.nf

@ -2,10 +2,10 @@ process PICARD_CROSSCHECKFINGERPRINTS {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(input1)

modules/picard/filtersamreads/main.nf

@ -2,10 +2,10 @@ process PICARD_FILTERSAMREADS {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam), path(readlist)

modules/picard/fixmateinformation/main.nf

@ -2,10 +2,10 @@ process PICARD_FIXMATEINFORMATION {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::picard=2.26.9" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.9--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.9--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam)
@ -31,8 +31,8 @@ process PICARD_FIXMATEINFORMATION {
picard \\
FixMateInformation \\
-Xmx${avail_mem}g \\
-I ${bam} \\
-O ${prefix}.bam \\
--INPUT ${bam} \\
--OUTPUT ${prefix}.bam \\
--VALIDATION_STRINGENCY ${STRINGENCY}
cat <<-END_VERSIONS > versions.yml

modules/picard/liftovervcf/main.nf

@ -2,10 +2,10 @@ process PICARD_LIFTOVERVCF {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(input_vcf)
@ -35,11 +35,11 @@ process PICARD_LIFTOVERVCF {
-Xmx${avail_mem}g \\
LiftoverVcf \\
$args \\
I=$input_vcf \\
O=${prefix}.lifted.vcf.gz \\
CHAIN=$chain \\
REJECT=${prefix}.unlifted.vcf.gz \\
R=$fasta
--INPUT $input_vcf \\
--OUTPUT ${prefix}.lifted.vcf.gz \\
--CHAIN $chain \\
--REJECT ${prefix}.unlifted.vcf.gz \\
--REFERENCE_SEQUENCE $fasta
cat <<-END_VERSIONS > versions.yml
"${task.process}":

modules/picard/markduplicates/main.nf

@ -2,10 +2,10 @@ process PICARD_MARKDUPLICATES {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam)
@ -33,9 +33,9 @@ process PICARD_MARKDUPLICATES {
-Xmx${avail_mem}g \\
MarkDuplicates \\
$args \\
I=$bam \\
O=${prefix}.bam \\
M=${prefix}.MarkDuplicates.metrics.txt
--INPUT $bam \\
--OUTPUT ${prefix}.bam \\
--METRICS_FILE ${prefix}.MarkDuplicates.metrics.txt
cat <<-END_VERSIONS > versions.yml
"${task.process}":

modules/picard/mergesamfiles/main.nf

@ -2,10 +2,10 @@ process PICARD_MERGESAMFILES {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bams)
@ -33,8 +33,8 @@ process PICARD_MERGESAMFILES {
-Xmx${avail_mem}g \\
MergeSamFiles \\
$args \\
${'INPUT='+bam_files.join(' INPUT=')} \\
OUTPUT=${prefix}.bam
${'--INPUT '+bam_files.join(' --INPUT ')} \\
--OUTPUT ${prefix}.bam
cat <<-END_VERSIONS > versions.yml
"${task.process}":
picard: \$( echo \$(picard MergeSamFiles --version 2>&1) | grep -o 'Version:.*' | cut -f2- -d:)

modules/picard/sortsam/main.nf

@ -2,10 +2,10 @@ process PICARD_SORTSAM {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(bam)

modules/picard/sortvcf/main.nf

@ -2,10 +2,10 @@ process PICARD_SORTVCF {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::picard=2.26.10" : null)
conda (params.enable_conda ? "bioconda::picard=2.27.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/picard:2.26.10--hdfd78af_0' :
'quay.io/biocontainers/picard:2.26.10--hdfd78af_0' }"
'https://depot.galaxyproject.org/singularity/picard:2.27.1--hdfd78af_0' :
'quay.io/biocontainers/picard:2.27.1--hdfd78af_0' }"
input:
tuple val(meta), path(vcf)
@ -22,8 +22,8 @@ process PICARD_SORTVCF {
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
def seq_dict = sequence_dict ? "-SEQUENCE_DICTIONARY $sequence_dict" : ""
def reference = reference ? "-REFERENCE_SEQUENCE $reference" : ""
def seq_dict = sequence_dict ? "--SEQUENCE_DICTIONARY $sequence_dict" : ""
def reference = reference ? "--REFERENCE_SEQUENCE $reference" : ""
def avail_mem = 3
if (!task.memory) {
log.info '[Picard SortVcf] Available memory not known - defaulting to 3GB. Specify process memory requirements to change this.'

modules/samtools/bam2fq/main.nf

@ -45,7 +45,7 @@ process SAMTOOLS_BAM2FQ {
bam2fq \\
$args \\
-@ $task.cpus \\
$inputbam >${prefix}_interleaved.fq.gz
$inputbam | gzip --no-name > ${prefix}_interleaved.fq.gz
cat <<-END_VERSIONS > versions.yml
"${task.process}":

modules/samtools/view/main.nf

@ -41,4 +41,16 @@ process SAMTOOLS_VIEW {
samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
stub:
def prefix = task.ext.prefix ?: "${meta.id}"
"""
touch ${prefix}.bam
touch ${prefix}.cram
cat <<-END_VERSIONS > versions.yml
"${task.process}":
samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
}

modules/shigatyper/main.nf

@ -0,0 +1,64 @@
process SHIGATYPER {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::shigatyper=2.0.1" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/shigatyper%3A2.0.1--pyhdfd78af_0':
'quay.io/biocontainers/shigatyper:2.0.1--pyhdfd78af_0' }"
input:
tuple val(meta), path(reads)
output:
tuple val(meta), path("${prefix}.tsv") , emit: tsv
tuple val(meta), path("${prefix}-hits.tsv"), optional: true, emit: hits
path "versions.yml" , emit: versions
when:
task.ext.when == null || task.ext.when
script:
def args = task.ext.args ?: ''
prefix = task.ext.prefix ?: "${meta.id}"
if (meta.is_ont) {
"""
shigatyper \\
$args \\
--SE $reads \\
--ont \\
--name $prefix
cat <<-END_VERSIONS > versions.yml
"${task.process}":
shigatyper: \$(echo \$(shigatyper --version 2>&1) | sed 's/^.*ShigaTyper //' )
END_VERSIONS
"""
} else if (meta.single_end) {
"""
shigatyper \\
$args \\
--SE $reads \\
--name $prefix
cat <<-END_VERSIONS > versions.yml
"${task.process}":
shigatyper: \$(echo \$(shigatyper --version 2>&1) | sed 's/^.*ShigaTyper //' )
END_VERSIONS
"""
} else {
"""
shigatyper \\
$args \\
--R1 ${reads[0]} \\
--R2 ${reads[1]} \\
--name $prefix
cat <<-END_VERSIONS > versions.yml
"${task.process}":
shigatyper: \$(echo \$(shigatyper --version 2>&1) | sed 's/^.*ShigaTyper //' )
END_VERSIONS
"""
}
}
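The three script branches key off the meta map, so mixed Illumina and Nanopore samples can share one input channel. A sketch (ids and file names illustrative):

include { SHIGATYPER } from './modules/shigatyper/main'

workflow {
    ch_reads = Channel.of(
        [ [ id:'ont', single_end:true,  is_ont:true  ], file('ont.fastq.gz') ],
        [ [ id:'pe',  single_end:false, is_ont:false ], [ file('r1.fastq.gz'), file('r2.fastq.gz') ] ]
    )
    SHIGATYPER ( ch_reads )
}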

modules/shigatyper/meta.yml

@ -0,0 +1,47 @@
name: "shigatyper"
description: Determine Shigella serotype from Illumina or Oxford Nanopore reads
keywords:
- fastq
- shigella
- serotype
tools:
- "shigatyper":
description: "Typing tool for Shigella spp. from WGS Illumina sequencing"
homepage: "https://github.com/CFSAN-Biostatistics/shigatyper"
documentation: "https://github.com/CFSAN-Biostatistics/shigatyper"
tool_dev_url: "https://github.com/CFSAN-Biostatistics/shigatyper"
doi: "10.1128/AEM.00165-19"
licence: "['Public Domain']"
input:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false, is_ont:false ]
- reads:
type: file
description: Illumina or Nanopore FASTQ file
pattern: "*.fastq.gz"
output:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
- tsv:
type: file
description: A TSV formatted file with ShigaTyper results
pattern: "*.tsv"
- hits:
type: file
description: A TSV formatted file with individual gene hits
pattern: "*-hits.tsv"
authors:
- "@rpetit3"

modules/slimfastq/main.nf

@ -0,0 +1,52 @@
def VERSION = '2.04'
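// Version pinned manually; presumably slimfastq does not report its version on the CLI, so update this string when bumping the container.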
process SLIMFASTQ {
tag "$meta.id"
label 'process_low'
conda (params.enable_conda ? "bioconda::slimfastq=2.04" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/slimfastq:2.04--h87f3376_2':
'quay.io/biocontainers/slimfastq:2.04--h87f3376_2' }"
input:
tuple val(meta), path(fastq)
output:
tuple val(meta), path("*.sfq"), emit: sfq
path "versions.yml" , emit: versions
when:
task.ext.when == null || task.ext.when
script:
def args = task.ext.args ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
if (meta.single_end) {
"""
gzip -d -c '${fastq}' | slimfastq \\
$args \\
-f '${prefix}.sfq'
cat <<-END_VERSIONS > versions.yml
"${task.process}":
slimfastq: ${VERSION}
END_VERSIONS
"""
} else {
"""
gzip -d -c '${fastq[0]}' | slimfastq \\
$args \\
-f '${prefix}_1.sfq'
gzip -d -c '${fastq[1]}' | slimfastq \\
$args \\
-f '${prefix}_2.sfq'
cat <<-END_VERSIONS > versions.yml
"${task.process}":
slimfastq: ${VERSION}
END_VERSIONS
"""
}
}
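
As with the process above, single-end versus paired-end handling is driven by meta.single_end; a sketch of a paired-end invocation (paths hypothetical):

workflow test_slimfastq_pe {
    input = [
        [ id:'test', single_end:false ],                      // meta map
        [ file('test_1.fastq.gz'), file('test_2.fastq.gz') ]  // hypothetical FASTQ pair
    ]
    SLIMFASTQ ( input )
}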


@ -0,0 +1,41 @@
name: "slimfastq"
description: Fast, efficient, lossless compression of FASTQ files.
keywords:
- FASTQ
- compression
- lossless
tools:
- "slimfastq":
description: "slimfastq efficiently compresses/decompresses FASTQ files"
homepage: "https://github.com/Infinidat/slimfastq"
tool_dev_url: "https://github.com/Infinidat/slimfastq"
licence: "['BSD-3-clause']"
input:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- fastq:
type: file
description: Either a single-end FASTQ file or paired-end files.
pattern: "*.{fq.gz,fastq.gz}"
output:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
- sfq:
type: file
description: Either one or two sequence files in slimfastq compressed format.
pattern: "*.{sfq}"
authors:
- "@Midnighter"


@ -8,15 +8,16 @@ LABEL \
COPY environment.yml /
RUN conda env create -f /environment.yml && conda clean -a
# Add conda installation dir to PATH (instead of doing 'conda activate')
ENV PATH /opt/conda/envs/nf-core-snpeff-5.0/bin:$PATH
# Setup default ARG variables
ARG GENOME=GRCh38
ARG SNPEFF_CACHE_VERSION=99
ARG SNPEFF_TAG=99
# Add conda installation dir to PATH (instead of doing 'conda activate')
ENV PATH /opt/conda/envs/nf-core-snpeff-${SNPEFF_TAG}/bin:$PATH
# Download Genome
RUN snpEff download -v ${GENOME}.${SNPEFF_CACHE_VERSION}
# Dump the details of the installed packages to a file for posterity
RUN conda env export --name nf-core-snpeff-5.0 > nf-core-snpeff-5.0.yml
RUN conda env export --name nf-core-snpeff-${SNPEFF_TAG} > nf-core-snpeff-${SNPEFF_TAG}.yml

modules/snpeff/build.sh Executable file → Normal file

@ -9,10 +9,11 @@ build_push() {
SNPEFF_TAG=$3
docker build \
. \
-t nfcore/snpeff:${SNPEFF_TAG}.${GENOME} \
software/snpeff/. \
--build-arg GENOME=${GENOME} \
--build-arg SNPEFF_CACHE_VERSION=${SNPEFF_CACHE_VERSION}
--build-arg SNPEFF_CACHE_VERSION=${SNPEFF_CACHE_VERSION} \
--build-arg SNPEFF_TAG=${SNPEFF_TAG}
docker push nfcore/snpeff:${SNPEFF_TAG}.${GENOME}
}


@ -10,18 +10,6 @@ tools:
homepage: https://pcingola.github.io/SnpEff/
documentation: https://pcingola.github.io/SnpEff/se_introduction/
licence: ["MIT"]
params:
- use_cache:
type: boolean
description: |
Enable the usage of containers with cache.
Does not work with Conda.
- snpeff_tag:
type: value
description: |
Specify the tag for the container
https://hub.docker.com/r/nfcore/snpeff/tags
input:
- meta:
type: map


@ -0,0 +1,47 @@
process SRST2_SRST2 {
tag "${meta.id}"
label 'process_low'
conda (params.enable_conda ? "bioconda::srst2=0.2.0" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/srst2%3A0.2.0--py27_2':
'quay.io/biocontainers/srst2:0.2.0--py27_2'}"
input:
tuple val(meta), path(fastq_s), path(db)
output:
tuple val(meta), path("*_genes_*_results.txt") , optional:true, emit: gene_results
tuple val(meta), path("*_fullgenes_*_results.txt") , optional:true, emit: fullgene_results
tuple val(meta), path("*_mlst_*_results.txt") , optional:true, emit: mlst_results
tuple val(meta), path("*.pileup") , emit: pileup
tuple val(meta), path("*.sorted.bam") , emit: sorted_bam
path "versions.yml" , emit: versions
when:
task.ext.when == null || task.ext.when
script:
def args = task.ext.args ?: ""
def prefix = task.ext.prefix ?: "${meta.id}"
def read_s = meta.single_end ? "--input_se ${fastq_s}" : "--input_pe ${fastq_s[0]} ${fastq_s[1]}"
if (meta.db=="gene") {
database = "--gene_db ${db}"
} else if (meta.db=="mlst") {
database = "--mlst_db ${db}"
} else {
error "Please set meta.db to either \"gene\" or \"mlst\""
}
"""
srst2 \\
${read_s} \\
--threads $task.cpus \\
--output ${prefix} \\
${database} \\
$args
cat <<-END_VERSIONS > versions.yml
"${task.process}":
srst2: \$(echo \$(srst2 --version 2>&1) | sed 's/srst2 //')
END_VERSIONS
"""
}
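
Because the script block dispatches on meta.db, every sample has to carry that key; a minimal sketch (all paths hypothetical):

workflow test_srst2_gene {
    input = [
        [ id:'test', single_end:false, db:'gene' ],           // meta map selecting --gene_db
        [ file('test_1.fastq.gz'), file('test_2.fastq.gz') ], // hypothetical reads
        file('gene_db.fasta')                                 // hypothetical database
    ]
    SRST2_SRST2 ( input )
}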


@ -0,0 +1,72 @@
name: srst2_srst2
description: |
Short Read Sequence Typing for Bacterial Pathogens is a program designed to take Illumina sequence data,
an MLST database and/or a database of gene sequences (e.g. resistance genes, virulence genes, etc.)
and report the presence of STs and/or reference genes.
keywords:
- mlst
- typing
- illumina
tools:
- srst2:
description: "Short Read Sequence Typing for Bacterial Pathogens"
homepage: "http://katholt.github.io/srst2/"
documentation: "https://github.com/katholt/srst2/blob/master/README.md"
tool_dev_url: "https://github.com/katholt/srst2"
doi: "10.1186/s13073-014-0090-6"
licence: ["BSD"]
input:
- meta:
type: map
description: |
Groovy Map containing sample information
id: should be the identification number or sample name
single_end: should be true for single end data and false for paired in data
db: should be either 'gene' to use the --gene_db option or 'mlst' to use the --mlst_db option
e.g. [ id:'sample', single_end:false , db:'gene']
- fastq_s:
type: file
description: |
gzipped FASTQ file(s). If the files are NOT in MiSeq format
(e.g. sample_S1_L001_R1_001.fastq.gz), use the --forward and --reverse
parameters; otherwise the default read suffix is _1, i.e. forward reads
are expected as sample_1.fastq.gz.
pattern: "*.fastq.gz"
- db:
type: file
description: Database in FASTA format
pattern: "*.fasta"
output:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'sample', single_end:false ]
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
- txt:
type: file
description: A detailed report, with one row per gene per sample, as described at https://github.com/katholt/srst2#gene-typing
pattern: "*_fullgenes_*_results.txt"
- txt:
type: file
description: A tabulated summary report of samples x genes.
pattern: "*_genes_*_results.txt"
- txt:
type: file
description: A tabulated summary report of mlst subtyping.
pattern: "*_mlst_*_results.txt"
- bam:
type: file
description: Sorted BAM file
pattern: "*.sorted.bam"
- pileup:
type: file
description: SAMtools pileup file
pattern: "*.pileup"
authors:
- "@jvhagey"


@ -11,7 +11,8 @@ process TABIX_TABIX {
tuple val(meta), path(tab)
output:
tuple val(meta), path("*.tbi"), emit: tbi
tuple val(meta), path("*.tbi"), optional:true, emit: tbi
tuple val(meta), path("*.csi"), optional:true, emit: csi
path "versions.yml" , emit: versions
when:

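The new csi channel only fills when tabix is told to build a CSI index; assuming the usual ext.args plumbing, a test config along these lines would exercise it:

process {
    withName: TABIX_TABIX {
        ext.args = '--preset vcf --csi'   // --csi switches tabix from TBI to CSI output
    }
}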

@ -31,6 +31,10 @@ output:
type: file
description: tabix index file
pattern: "*.{tbi}"
- csi:
type: file
description: coordinate sorted index file
pattern: "*.{csi}"
- versions:
type: file
description: File containing software versions


@ -0,0 +1,49 @@
def VERSION = '1.8.3'
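// Version pinned manually; presumably vardict-java does not expose it on the CLI, so update this string when bumping the container.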
process VARDICTJAVA {
tag "$meta.id"
label 'process_medium'
conda (params.enable_conda ? "bioconda::vardict-java=1.8.3" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
'https://depot.galaxyproject.org/singularity/vardict-java:1.8.3--hdfd78af_0':
'quay.io/biocontainers/vardict-java:1.8.3--hdfd78af_0' }"
input:
tuple val(meta), path(bam), path(bai), path(bed)
tuple path(fasta), path(fasta_fai)
output:
tuple val(meta), path("*.vcf.gz"), emit: vcf
path "versions.yml" , emit: versions
when:
task.ext.when == null || task.ext.when
script:
def args = task.ext.args ?: ''
def args2 = task.ext.args2 ?: ''
def prefix = task.ext.prefix ?: "${meta.id}"
"""
vardict-java \\
$args \\
-c 1 -S 2 -E 3 \\
-b $bam \\
-th $task.cpus \\
-N $prefix \\
-G $fasta \\
$bed \\
| teststrandbias.R \\
| var2vcf_valid.pl \\
$args2 \\
-N $prefix \\
| gzip -c > ${prefix}.vcf.gz
cat <<-END_VERSIONS > versions.yml
"${task.process}":
vardict-java: $VERSION
var2vcf_valid.pl: \$(echo \$(var2vcf_valid.pl -h | sed -n 2p | awk '{ print \$2 }'))
END_VERSIONS
"""
}
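
A sketch of wiring the two input tuples (file names hypothetical):

workflow test_vardictjava {
    bam_input = [
        [ id:'test' ],        // meta map
        file('test.bam'),     // hypothetical alignment
        file('test.bam.bai'), // hypothetical index
        file('test.bed')      // hypothetical regions of interest
    ]
    reference = [ file('genome.fasta'), file('genome.fasta.fai') ] // hypothetical reference
    VARDICTJAVA ( bam_input, reference )
}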


@ -0,0 +1,60 @@
name: "vardictjava"
description: The Java port of the VarDict variant caller
keywords:
- variant calling
- VarDict
- AstraZeneca
tools:
- "vardictjava":
description: "Java port of the VarDict variant discovery program"
homepage: "https://github.com/AstraZeneca-NGS/VarDictJava"
documentation: "https://github.com/AstraZeneca-NGS/VarDictJava"
tool_dev_url: "https://github.com/AstraZeneca-NGS/VarDictJava"
doi: "10.1093/nar/gkw227"
licence: "['MIT']"
input:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- bam:
type: file
description: BAM/SAM file
pattern: "*.{bam,sam}"
- bai:
type: file
description: Index of the BAM file
pattern: "*.bai"
- fasta:
type: file
description: FASTA of the reference genome
pattern: "*.{fa,fasta}"
- fasta_fai:
type: file
description: The index of the FASTA of the reference genome
pattern: "*.fai"
- bed:
type: file
description: BED with the regions of interest
pattern: "*.bed"
output:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
- vcf:
type: file
description: VCF file output
pattern: "*.vcf.gz"
authors:
- "@nvnieuwk"


@ -0,0 +1,31 @@
//
// Run VEP to annotate VCF files
//
include { ENSEMBLVEP } from '../../../../modules/ensemblvep/main'
include { TABIX_BGZIPTABIX } from '../../../../modules/tabix/bgziptabix/main'
workflow ANNOTATION_ENSEMBLVEP {
take:
vcf // channel: [ val(meta), vcf ]
vep_genome // value: genome to use
vep_species // value: species to use
vep_cache_version // value: cache version to use
vep_cache // path: /path/to/vep/cache (optional)
vep_extra_files // channel: [ file1, file2... ] (optional)
main:
ch_versions = Channel.empty()
ENSEMBLVEP(vcf, vep_genome, vep_species, vep_cache_version, vep_cache, vep_extra_files)
TABIX_BGZIPTABIX(ENSEMBLVEP.out.vcf)
// Gather versions of all tools used
ch_versions = ch_versions.mix(ENSEMBLVEP.out.versions.first())
ch_versions = ch_versions.mix(TABIX_BGZIPTABIX.out.versions.first())
emit:
vcf_tbi = TABIX_BGZIPTABIX.out.gz_tbi // channel: [ val(meta), vcf.gz, vcf.gz.tbi ]
reports = ENSEMBLVEP.out.report // path: *.html
versions = ch_versions // path: versions.yml
}
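
A sketch of calling the subworkflow from a pipeline (values are illustrative, mirroring the ensemblvep test further below):

ANNOTATION_ENSEMBLVEP (
    ch_vcf,                    // channel: [ val(meta), vcf ]
    'WBcel235',                // hypothetical genome
    'caenorhabditis_elegans',  // hypothetical species
    '104',                     // hypothetical cache version
    [],                        // no local cache, download on the fly
    []                         // no extra plugin files
)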


@ -0,0 +1,49 @@
name: annotation_ensemblvep
description: |
Perform annotation with ensemblvep and bgzip + tabix index the resulting VCF file
keywords:
- ensemblvep
modules:
- ensemblvep
- tabix/bgziptabix
input:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- vcf:
type: file
description: |
vcf to annotate
- genome:
type: value
description: |
which genome to annotate with
- species:
type: value
description: |
which species to annotate with
- cache_version:
type: value
description: |
which version of the cache to annotate with
- cache:
type: file
description: |
path to VEP cache (optional)
- extra_files:
type: tuple
description: |
path to file(s) needed for plugins (optional)
output:
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
- vcf_tbi:
type: file
description: Compressed vcf file + tabix index
pattern: "[ *{.vcf.gz,vcf.gz.tbi} ]"
authors:
- "@maxulysse"


@ -0,0 +1,28 @@
//
// Run SNPEFF to annotate VCF files
//
include { SNPEFF } from '../../../../modules/snpeff/main'
include { TABIX_BGZIPTABIX } from '../../../../modules/tabix/bgziptabix/main'
workflow ANNOTATION_SNPEFF {
take:
vcf // channel: [ val(meta), vcf ]
snpeff_db // value: db version to use
snpeff_cache // path: /path/to/snpeff/cache (optional)
main:
ch_versions = Channel.empty()
SNPEFF(vcf, snpeff_db, snpeff_cache)
TABIX_BGZIPTABIX(SNPEFF.out.vcf)
// Gather versions of all tools used
ch_versions = ch_versions.mix(SNPEFF.out.versions.first())
ch_versions = ch_versions.mix(TABIX_BGZIPTABIX.out.versions.first())
emit:
vcf_tbi = TABIX_BGZIPTABIX.out.gz_tbi // channel: [ val(meta), vcf.gz, vcf.gz.tbi ]
reports = SNPEFF.out.report // path: *.html
versions = ch_versions // path: versions.yml
}
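
And the snpEff flavour, analogously (the db string is a hypothetical example):

ANNOTATION_SNPEFF (
    ch_vcf,         // channel: [ val(meta), vcf ]
    'WBcel235.99',  // hypothetical snpEff db identifier
    []              // no local cache
)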


@ -11,11 +11,19 @@ input:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test' ]
- input:
type: vcf
description: list containing one vcf file
pattern: "[ *.{vcf,vcf.gz} ]"
e.g. [ id:'test', single_end:false ]
- vcf:
type: file
description: |
vcf to annotate
- db:
type: value
description: |
which db to annotate with
- cache:
type: file
description: |
path to snpEff cache (optional)
output:
- versions:
type: file


@ -1,26 +0,0 @@
//
// Run VEP to annotate VCF files
//
include { ENSEMBLVEP } from '../../../modules/ensemblvep/main'
include { TABIX_BGZIPTABIX as ANNOTATION_BGZIPTABIX } from '../../../modules/tabix/bgziptabix/main'
workflow ANNOTATION_ENSEMBLVEP {
take:
vcf // channel: [ val(meta), vcf ]
vep_genome // value: which genome
vep_species // value: which species
vep_cache_version // value: which cache version
vep_cache // path: path_to_vep_cache (optional)
main:
ENSEMBLVEP(vcf, vep_genome, vep_species, vep_cache_version, vep_cache)
ANNOTATION_BGZIPTABIX(ENSEMBLVEP.out.vcf)
ch_versions = ENSEMBLVEP.out.versions.first().mix(ANNOTATION_BGZIPTABIX.out.versions.first())
emit:
vcf_tbi = ANNOTATION_BGZIPTABIX.out.gz_tbi // channel: [ val(meta), vcf.gz, vcf.gz.tbi ]
reports = ENSEMBLVEP.out.report // path: *.html
versions = ch_versions // path: versions.yml
}


@ -1,29 +0,0 @@
name: annotation_ensemblvep
description: |
Perform annotation with ensemblvep and bgzip + tabix index the resulting VCF file
keywords:
- ensemblvep
modules:
- ensemblvep
- tabix/bgziptabix
input:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test' ]
- input:
type: vcf
description: list containing one vcf file
pattern: "[ *.{vcf,vcf.gz} ]"
output:
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
- vcf_tbi:
type: file
description: Compressed vcf file + tabix index
pattern: "[ *{.vcf.gz,vcf.gz.tbi} ]"
authors:
- "@maxulysse"


@ -1,23 +0,0 @@
//
// Run SNPEFF to annotate VCF files
//
include { SNPEFF } from '../../../modules/snpeff/main'
include { TABIX_BGZIPTABIX as ANNOTATION_BGZIPTABIX } from '../../../modules/tabix/bgziptabix/main'
workflow ANNOTATION_SNPEFF {
take:
vcf // channel: [ val(meta), vcf ]
snpeff_db // value: version of db to use
snpeff_cache // path: path_to_snpeff_cache (optional)
main:
SNPEFF(vcf, snpeff_db, snpeff_cache)
ANNOTATION_BGZIPTABIX(SNPEFF.out.vcf)
ch_versions = SNPEFF.out.versions.first().mix(ANNOTATION_BGZIPTABIX.out.versions.first())
emit:
vcf_tbi = ANNOTATION_BGZIPTABIX.out.gz_tbi // channel: [ val(meta), vcf.gz, vcf.gz.tbi ]
reports = SNPEFF.out.report // path: *.html
versions = ch_versions // path: versions.yml
}


@ -0,0 +1,41 @@
//
// Run QC steps on BAM/CRAM files using Picard
//
include { PICARD_COLLECTMULTIPLEMETRICS } from '../../../modules/picard/collectmultiplemetrics/main'
include { PICARD_COLLECTWGSMETRICS } from '../../../modules/picard/collectwgsmetrics/main'
include { PICARD_COLLECTHSMETRICS } from '../../../modules/picard/collecthsmetrics/main'
workflow BAM_QC_PICARD {
take:
ch_bam // channel: [ val(meta), [ bam ]]
ch_fasta // channel: [ fasta ]
ch_fasta_fai // channel: [ fasta_fai ]
ch_bait_interval // channel: [ bait_interval ]
ch_target_interval // channel: [ target_interval ]
main:
ch_versions = Channel.empty()
ch_coverage_metrics = Channel.empty()
PICARD_COLLECTMULTIPLEMETRICS( ch_bam, ch_fasta )
ch_versions = ch_versions.mix(PICARD_COLLECTMULTIPLEMETRICS.out.versions.first())
if (ch_bait_interval || ch_target_interval) {
if (!ch_bait_interval) log.error("Bait interval channel is empty")
if (!ch_target_interval) log.error("Target interval channel is empty")
PICARD_COLLECTHSMETRICS( ch_bam, ch_fasta, ch_fasta_fai, ch_bait_interval, ch_target_interval )
ch_coverage_metrics = ch_coverage_metrics.mix(PICARD_COLLECTHSMETRICS.out.metrics)
ch_versions = ch_versions.mix(PICARD_COLLECTHSMETRICS.out.versions.first())
} else {
PICARD_COLLECTWGSMETRICS( ch_bam, ch_fasta )
ch_versions = ch_versions.mix(PICARD_COLLECTWGSMETRICS.out.versions.first())
ch_coverage_metrics = ch_coverage_metrics.mix(PICARD_COLLECTWGSMETRICS.out.metrics)
}
emit:
coverage_metrics = ch_coverage_metrics // channel: [ val(meta), [ coverage_metrics ] ]
multiple_metrics = PICARD_COLLECTMULTIPLEMETRICS.out.metrics // channel: [ val(meta), [ multiple_metrics ] ]
versions = ch_versions // channel: [ versions.yml ]
}
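
Whether the WGS or the hybrid-selection branch runs is decided purely by the interval channels; a sketch of the WGS case (channels hypothetical):

BAM_QC_PICARD (
    ch_bam,        // channel: [ val(meta), [ bam ] ]
    ch_fasta,      // channel: [ fasta ]
    ch_fasta_fai,  // channel: [ fasta_fai ]
    [],            // no bait intervals
    []             // no target intervals -> CollectWgsMetrics branch
)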


@ -0,0 +1,60 @@
name: bam_qc_picard
description: Produces comprehensive alignment statistics from BAM/CRAM files using Picard tools
keywords:
- statistics
- counts
- hs_metrics
- wgs_metrics
- bam
- sam
- cram
modules:
- picard/collectmultiplemetrics
- picard/collectwgsmetrics
- picard/collecthsmetrics
input:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- bam:
type: file
description: BAM/CRAM/SAM file
pattern: "*.{bam,cram,sam}"
- fasta:
type: optional file
description: Reference fasta file
pattern: "*.{fasta,fa}"
- fasta_fai:
type: optional file
description: Reference fasta file index
pattern: "*.{fasta,fa}.fai"
- bait_intervals:
type: optional file
description: An interval list file that contains the locations of the baits used.
pattern: "baits.interval_list"
- target_intervals:
type: optional file
description: An interval list file that contains the locations of the targets.
pattern: "targets.interval_list"
output:
- meta:
type: map
description: |
Groovy Map containing sample information
e.g. [ id:'test', single_end:false ]
- coverage_metrics:
type: file
description: Alignment metrics files generated by picard CollectHsMetrics or CollectWgsMetrics
pattern: "*_metrics.txt"
- multiple_metrics:
type: file
description: Alignment metrics files generated by picard CollectMultipleMetrics
pattern: "*_{metrics}"
- versions:
type: file
description: File containing software versions
pattern: "versions.yml"
authors:
- "@matthdsm"


@ -603,6 +603,10 @@ elprep/filter:
- modules/elprep/filter/**
- tests/modules/elprep/filter/**
elprep/merge:
- modules/elprep/merge/**
- tests/modules/elprep/merge/**
elprep/split:
- modules/elprep/split/**
- tests/modules/elprep/split/**
@ -887,6 +891,14 @@ hamronization/summarize:
- modules/hamronization/summarize/**
- tests/modules/hamronization/summarize/**
happy/happy:
- modules/happy/happy/**
- tests/modules/happy/happy/**
happy/prepy:
- modules/happy/prepy/**
- tests/modules/happy/prepy/**
hicap:
- modules/hicap/**
- tests/modules/hicap/**
@ -1046,10 +1058,18 @@ krona/kronadb:
- modules/krona/kronadb/**
- tests/modules/krona/kronadb/**
krona/ktupdatetaxonomy:
- modules/krona/ktupdatetaxonomy/**
- tests/modules/krona/ktupdatetaxonomy/**
krona/ktimporttaxonomy:
- modules/krona/ktimporttaxonomy/**
- tests/modules/krona/ktimporttaxonomy/**
krona/ktimporttext:
- modules/krona/ktimporttext/**
- tests/modules/krona/ktimporttext/**
last/dotplot:
- modules/last/dotplot/**
- tests/modules/last/dotplot/**
@ -1234,6 +1254,10 @@ mosdepth:
- modules/mosdepth/**
- tests/modules/mosdepth/**
motus/downloaddb:
- modules/motus/downloaddb/**
- tests/modules/motus/downloaddb/**
motus/profile:
- modules/motus/profile/**
- tests/modules/motus/profile/**
@ -1715,6 +1739,10 @@ seqwish/induce:
- modules/seqwish/induce/**
- tests/modules/seqwish/induce/**
shigatyper:
- modules/shigatyper/**
- tests/modules/shigatyper/**
shovill:
- modules/shovill/**
- tests/modules/shovill/**
@ -1723,6 +1751,10 @@ sistr:
- modules/sistr/**
- tests/modules/sistr/**
slimfastq:
- modules/slimfastq/**
- tests/modules/slimfastq/**
snapaligner/index:
- modules/snapaligner/index/**
- tests/modules/snapaligner/index/**
@ -1771,6 +1803,10 @@ sratools/prefetch:
- modules/sratools/prefetch/**
- tests/modules/sratools/prefetch/**
srst2/srst2:
- modules/srst2/srst2/**
- tests/modules/srst2/srst2/**
ssuissero:
- modules/ssuissero/**
- tests/modules/ssuissero/**
@ -1912,6 +1948,10 @@ unzip:
- modules/unzip/**
- tests/modules/unzip/**
vardictjava:
- modules/vardictjava/**
- tests/modules/vardictjava/**
variantbam:
- modules/variantbam/**
- tests/modules/variantbam/**


@ -14,6 +14,7 @@ params {
genome_paf = "${test_data_dir}/genomics/sarscov2/genome/genome.paf"
genome_sizes = "${test_data_dir}/genomics/sarscov2/genome/genome.sizes"
transcriptome_fasta = "${test_data_dir}/genomics/sarscov2/genome/transcriptome.fasta"
proteome_fasta = "${test_data_dir}/genomics/sarscov2/genome/proteome.fasta"
transcriptome_paf = "${test_data_dir}/genomics/sarscov2/genome/transcriptome.paf"
test_bed = "${test_data_dir}/genomics/sarscov2/genome/bed/test.bed"
@ -109,6 +110,11 @@ params {
test_sequencing_summary = "${test_data_dir}/genomics/sarscov2/nanopore/sequencing_summary/test.sequencing_summary.txt"
}
'metagenome' {
classified_reads_assignment = "${test_data_dir}/genomics/sarscov2/metagenome/test_1.kraken2.reads.txt"
kraken_report = "${test_data_dir}/genomics/sarscov2/metagenome/test_1.kraken2.report.txt"
krona_taxonomy = "${test_data_dir}/genomics/sarscov2/metagenome/krona_taxonomy.tab"
}
}
'homo_sapiens' {
'genome' {


@ -1,8 +1,8 @@
- name: antismash antismashlitedownloaddatabases test_antismash_antismashlitedownloaddatabases
command: nextflow run tests/modules/antismash/antismashlitedownloaddatabases -entry test_antismash_antismashlitedownloaddatabases -c tests/config/nextflow.config
tags:
- antismash
- antismash/antismashlitedownloaddatabases
- antismash
files:
- path: output/antismash/versions.yml
md5sum: 24859c67023abab99de295d3675a24b6
@ -12,6 +12,5 @@
- path: output/antismash/antismash_db/pfam
- path: output/antismash/antismash_db/resfam
- path: output/antismash/antismash_db/tigrfam
- path: output/antismash/css
- path: output/antismash/detection
- path: output/antismash/modules
- path: output/antismash/antismash_dir
- path: output/antismash/antismash_dir/detection/hmm_detection/data/bgc_seeds.hmm


@ -2,13 +2,29 @@
nextflow.enable.dsl = 2
include { BAMTOOLS_SPLIT } from '../../../../modules/bamtools/split/main.nf'
include { BAMTOOLS_SPLIT as BAMTOOLS_SPLIT_SINGLE } from '../../../../modules/bamtools/split/main.nf'
include { BAMTOOLS_SPLIT as BAMTOOLS_SPLIT_MULTIPLE } from '../../../../modules/bamtools/split/main.nf'
workflow test_bamtools_split {
workflow test_bamtools_split_single_input {
input = [
[ id:'test', single_end:false ], // meta map
file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true) ]
file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
]
BAMTOOLS_SPLIT ( input )
BAMTOOLS_SPLIT_SINGLE ( input )
}
workflow test_bamtools_split_multiple {
input = [
[ id:'test', single_end:false ], // meta map
[
file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true),
file(params.test_data['homo_sapiens']['illumina']['test2_paired_end_sorted_bam'], checkIfExists: true)
]
]
BAMTOOLS_SPLIT_MULTIPLE ( input )
}


@ -1,10 +1,23 @@
- name: bamtools split test_bamtools_split
command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
- name: bamtools split test_bamtools_split_single_input
command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split_single_input -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
tags:
- bamtools/split
- bamtools
- bamtools/split
files:
- path: output/bamtools/test.paired_end.sorted.REF_chr22.bam
- path: output/bamtools/test.REF_chr22.bam
md5sum: b7dc50e0edf9c6bfc2e3b0e6d074dc07
- path: output/bamtools/test.paired_end.sorted.REF_unmapped.bam
- path: output/bamtools/test.REF_unmapped.bam
md5sum: e0754bf72c51543b2d745d96537035fb
- path: output/bamtools/versions.yml
- name: bamtools split test_bamtools_split_multiple
command: nextflow run ./tests/modules/bamtools/split -entry test_bamtools_split_multiple -c ./tests/config/nextflow.config -c ./tests/modules/bamtools/split/nextflow.config
tags:
- bamtools
- bamtools/split
files:
- path: output/bamtools/test.REF_chr22.bam
md5sum: 585675bea34c48ebe9db06a561d4b4fa
- path: output/bamtools/test.REF_unmapped.bam
md5sum: 16ad644c87b9471f3026bc87c98b4963
- path: output/bamtools/versions.yml


@ -32,4 +32,4 @@ workflow test_bowtie2_align_paired_end {
BOWTIE2_BUILD ( fasta )
BOWTIE2_ALIGN ( input, BOWTIE2_BUILD.out.index, save_unaligned )
}
}

View file

@ -1,5 +1,16 @@
params {
force_large_index = false
}
process {
publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }
}
if (params.force_large_index) {
process {
withName: BOWTIE2_BUILD {
ext.args = '--large-index'
}
}
}


@ -39,3 +39,45 @@
md5sum: 52be6950579598a990570fbcf5372184
- path: ./output/bowtie2/bowtie2/genome.rev.2.bt2
md5sum: e3b4ef343dea4dd571642010a7d09597
- name: bowtie2 align single-end large-index
command: nextflow run ./tests/modules/bowtie2/align -entry test_bowtie2_align_single_end -c ./tests/config/nextflow.config -c ./tests/modules/bowtie2/align/nextflow.config --force_large_index
tags:
- bowtie2
- bowtie2/align
files:
- path: ./output/bowtie2/test.bam
- path: ./output/bowtie2/test.bowtie2.log
- path: ./output/bowtie2/bowtie2/genome.3.bt2l
md5sum: 8952b3e0b1ce9a7a5916f2e147180853
- path: ./output/bowtie2/bowtie2/genome.2.bt2l
md5sum: 22c284084784a0720989595e0c9461fd
- path: ./output/bowtie2/bowtie2/genome.1.bt2l
md5sum: 07d811cd4e350d56267183d2ac7023a5
- path: ./output/bowtie2/bowtie2/genome.4.bt2l
md5sum: c25be5f8b0378abf7a58c8a880b87626
- path: ./output/bowtie2/bowtie2/genome.rev.1.bt2l
md5sum: fda48e35925fb24d1c0785f021981e25
- path: ./output/bowtie2/bowtie2/genome.rev.2.bt2l
md5sum: 802c26d32b970e1b105032b7ce7348b4
- name: bowtie2 align paired-end large-index
command: nextflow run ./tests/modules/bowtie2/align -entry test_bowtie2_align_paired_end -c ./tests/config/nextflow.config -c ./tests/modules/bowtie2/align/nextflow.config --force_large_index
tags:
- bowtie2
- bowtie2/align
files:
- path: ./output/bowtie2/test.bam
- path: ./output/bowtie2/test.bowtie2.log
- path: ./output/bowtie2/bowtie2/genome.3.bt2l
md5sum: 8952b3e0b1ce9a7a5916f2e147180853
- path: ./output/bowtie2/bowtie2/genome.2.bt2l
md5sum: 22c284084784a0720989595e0c9461fd
- path: ./output/bowtie2/bowtie2/genome.1.bt2l
md5sum: 07d811cd4e350d56267183d2ac7023a5
- path: ./output/bowtie2/bowtie2/genome.4.bt2l
md5sum: c25be5f8b0378abf7a58c8a880b87626
- path: ./output/bowtie2/bowtie2/genome.rev.1.bt2l
md5sum: fda48e35925fb24d1c0785f021981e25
- path: ./output/bowtie2/bowtie2/genome.rev.2.bt2l
md5sum: 802c26d32b970e1b105032b7ce7348b4


@ -7,9 +7,22 @@ include { DIAMOND_BLASTP } from '../../../../modules/diamond/blastp/main.nf'
workflow test_diamond_blastp {
db = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
db = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
fasta = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
out_ext = 'txt'
blast_columns = 'qseqid qlen'
DIAMOND_MAKEDB ( db )
DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db )
DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
}
workflow test_diamond_blastp_daa {
db = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
fasta = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
out_ext = 'daa'
blast_columns = []
DIAMOND_MAKEDB ( db )
DIAMOND_BLASTP ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
}


@ -1,8 +1,17 @@
- name: diamond blastp
command: nextflow run ./tests/modules/diamond/blastp -entry test_diamond_blastp -c ./tests/config/nextflow.config -c ./tests/modules/diamond/blastp/nextflow.config
- name: diamond blastp test_diamond_blastp
command: nextflow run tests/modules/diamond/blastp -entry test_diamond_blastp -c tests/config/nextflow.config
tags:
- diamond
- diamond/blastp
- diamond
files:
- path: ./output/diamond/test.diamond_blastp.txt
md5sum: 3ca7f6290c1d8741c573370e6f8b4db0
- path: output/diamond/test.diamond_blastp.txt
- path: output/diamond/versions.yml
- name: diamond blastp test_diamond_blastp_daa
command: nextflow run tests/modules/diamond/blastp -entry test_diamond_blastp_daa -c tests/config/nextflow.config
tags:
- diamond/blastp
- diamond
files:
- path: output/diamond/test.diamond_blastp.daa
- path: output/diamond/versions.yml


@ -7,9 +7,22 @@ include { DIAMOND_BLASTX } from '../../../../modules/diamond/blastx/main.nf'
workflow test_diamond_blastx {
db = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
db = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
out_ext = 'tfdfdt' // Nonsense file extension to check default case.
blast_columns = 'qseqid qlen'
DIAMOND_MAKEDB ( db )
DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db )
DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
}
workflow test_diamond_blastx_daa {
db = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
fasta = [ file(params.test_data['sarscov2']['genome']['transcriptome_fasta'], checkIfExists: true) ]
out_ext = 'daa'
blast_columns = []
DIAMOND_MAKEDB ( db )
DIAMOND_BLASTX ( [ [id:'test'], fasta ], DIAMOND_MAKEDB.out.db, out_ext, blast_columns )
}


@ -1,8 +1,18 @@
- name: diamond blastx
command: nextflow run ./tests/modules/diamond/blastx -entry test_diamond_blastx -c ./tests/config/nextflow.config -c ./tests/modules/diamond/blastx/nextflow.config
- name: diamond blastx test_diamond_blastx
command: nextflow run tests/modules/diamond/blastx -entry test_diamond_blastx -c tests/config/nextflow.config
tags:
- diamond
- diamond/blastx
files:
- path: ./output/diamond/test.diamond_blastx.txt
md5sum: d41d8cd98f00b204e9800998ecf8427e
- path: output/diamond/test.diamond_blastx.txt
- path: output/diamond/versions.yml
- name: diamond blastx test_diamond_blastx_daa
command: nextflow run tests/modules/diamond/blastx -entry test_diamond_blastx_daa -c tests/config/nextflow.config
tags:
- diamond
- diamond/blastx
files:
- path: output/diamond/test.diamond_blastx.daa
md5sum: 0df4a833408416f32981415873facc11
- path: output/diamond/versions.yml


@ -6,7 +6,7 @@ include { DIAMOND_MAKEDB } from '../../../../modules/diamond/makedb/main.nf'
workflow test_diamond_makedb {
input = [ file(params.test_data['sarscov2']['genome']['genome_fasta'], checkIfExists: true) ]
input = [ file(params.test_data['sarscov2']['genome']['proteome_fasta'], checkIfExists: true) ]
DIAMOND_MAKEDB ( input )
}


@ -1,8 +1,9 @@
- name: diamond makedb test_diamond_makedb
command: nextflow run ./tests/modules/diamond/makedb -entry test_diamond_makedb -c ./tests/config/nextflow.config -c ./tests/modules/diamond/makedb/nextflow.config
command: nextflow run tests/modules/diamond/makedb -entry test_diamond_makedb -c tests/config/nextflow.config
tags:
- diamond
- diamond/makedb
- diamond
files:
- path: output/diamond/genome.fasta.dmnd
md5sum: 2447fb376394c20d43ea3aad2aa5d15d
- path: output/diamond/proteome.fasta.dmnd
md5sum: fc28c50b202dd7a7c5451cddff2ba1f4
- path: output/diamond/versions.yml


@ -0,0 +1,17 @@
#!/usr/bin/env nextflow
nextflow.enable.dsl = 2
include { ELPREP_SPLIT } from '../../../../modules/elprep/split/main.nf'
include { ELPREP_MERGE } from '../../../../modules/elprep/merge/main.nf'
workflow test_elprep_merge {
input = [
[ id:'test', single_end:false ], // meta map
file(params.test_data['homo_sapiens']['illumina']['test_paired_end_sorted_bam'], checkIfExists: true)
]
ELPREP_SPLIT ( input )
ELPREP_MERGE ( ELPREP_SPLIT.out.bam )
}


@ -0,0 +1,5 @@
process {
withName : ELPREP_MERGE {
publishDir = { "${params.outdir}/${task.process.tokenize(':')[-1].tokenize('_')[0].toLowerCase()}" }
}
}


@ -0,0 +1,8 @@
- name: elprep merge test_elprep_merge
command: nextflow run tests/modules/elprep/merge -entry test_elprep_merge -c tests/config/nextflow.config
tags:
- elprep
- elprep/merge
files:
- path: output/elprep/output/test.bam
- path: output/elprep/versions.yml


@ -10,5 +10,5 @@ workflow test_ensemblvep {
file(params.test_data['sarscov2']['illumina']['test_vcf'], checkIfExists: true)
]
ENSEMBLVEP ( input, "WBcel235", "caenorhabditis_elegans", "104", [] )
ENSEMBLVEP ( input, "WBcel235", "caenorhabditis_elegans", "104", [], [] )
}


@ -14,3 +14,14 @@ workflow test_gatk4_mergebamalignment {
GATK4_MERGEBAMALIGNMENT ( input, fasta, dict )
}
workflow test_gatk4_mergebamalignment_stubs {
input = [ [ id:'test' ], // meta map
"test_foo.bam",
"test_bar.bam"
]
fasta = "genome.fasta"
dict = "genome.fasta.dict"
GATK4_MERGEBAMALIGNMENT ( input, fasta, dict )
}


@ -7,3 +7,12 @@
- path: output/gatk4/test.bam
md5sum: e6f1b343700b7ccb94e81ae127433988
- path: output/gatk4/versions.yml
- name: gatk4 mergebamalignment test_gatk4_mergebamalignment_stubs
command: nextflow run ./tests/modules/gatk4/mergebamalignment -entry test_gatk4_mergebamalignment_stubs -c ./tests/config/nextflow.config -c ./tests/modules/gatk4/mergebamalignment/nextflow.config -stub-run
tags:
- gatk4
- gatk4/mergebamalignment
files:
- path: output/gatk4/test.bam
- path: output/gatk4/versions.yml

Some files were not shown because too many files have changed in this diff.