mirror of https://github.com/MillironX/nf-core_modules.git
synced 2024-11-14 05:43:08 +00:00

commit a3ae526521: Merge branch 'master' into bamtools/convert/remove_TODO
665 changed files with 14101 additions and 2149 deletions
.github/ISSUE_TEMPLATE/bug_report.md (vendored, deleted: 64 lines)
@@ -1,64 +0,0 @@
----
-name: Bug report
-about: Report something that is broken or incorrect
-title: "[BUG]"
----
-
-<!--
-# nf-core/module bug report
-
-Hi there!
-
-Thanks for telling us about a problem with the modules.
-Please delete this text and anything that's not relevant from the template below:
--->
-
-## Check Documentation
-
-I have checked the following places for your error:
-
-- [ ] [nf-core website: troubleshooting](https://nf-co.re/usage/troubleshooting)
-- [ ] [nf-core/module documentation](https://github.com/nf-core/modules/blob/master/README.md)
-
-## Description of the bug
-
-<!-- A clear and concise description of what the bug is. -->
-
-## Steps to reproduce
-
-Steps to reproduce the behaviour:
-
-1. Command line: <!-- [e.g. `nextflow run ...`] -->
-2. See error: <!-- [Please provide your error message] -->
-
-## Expected behaviour
-
-<!-- A clear and concise description of what you expected to happen. -->
-
-## Log files
-
-Have you provided the following extra information/files:
-
-- [ ] The command used to run the module
-- [ ] The `.nextflow.log` file <!-- this is a hidden file in the directory where you launched the module -->
-
-## System
-
-- Hardware: <!-- [e.g. HPC, Desktop, Cloud...] -->
-- Executor: <!-- [e.g. slurm, local, awsbatch...] -->
-- OS: <!-- [e.g. CentOS Linux, macOS, Linux Mint...] -->
-- Version <!-- [e.g. 7, 10.13.6, 18.3...] -->
-
-## Nextflow Installation
-
-- Version: <!-- [e.g. 19.10.0] -->
-
-## Container engine
-
-- Engine: <!-- [e.g. Conda, Docker, Singularity or Podman] -->
-- version: <!-- [e.g. 1.0.0] -->
-- Image tag: <!-- [e.g. nfcore/module:2.6] -->
-
-## Additional context
-
-<!-- Add any other context about the problem here. -->
.github/ISSUE_TEMPLATE/bug_report.yml (vendored, new file: 52 lines)
@@ -0,0 +1,52 @@
+name: Bug report
+description: Report something that is broken or incorrect
+labels: bug
+body:
+  - type: checkboxes
+    attributes:
+      label: Have you checked the docs?
+      description: I have checked the following places for my error
+      options:
+        - label: "[nf-core website: troubleshooting](https://nf-co.re/usage/troubleshooting)"
+          required: true
+        - label: "[nf-core modules documentation](https://nf-co.re/docs/contributing/modules)"
+          required: true
+
+  - type: textarea
+    id: description
+    attributes:
+      label: Description of the bug
+      description: A clear and concise description of what the bug is.
+    validations:
+      required: true
+
+  - type: textarea
+    id: command_used
+    attributes:
+      label: Command used and terminal output
+      description: Steps to reproduce the behaviour. Please paste the command you used to launch the pipeline and the output from your terminal.
+      render: console
+      placeholder: |
+        $ nextflow run ...
+
+        Some output where something broke
+
+  - type: textarea
+    id: files
+    attributes:
+      label: Relevant files
+      description: |
+        Please drag and drop the relevant files here. Create a `.zip` archive if the extension is not allowed.
+        Your verbose log file `.nextflow.log` is often useful _(this is a hidden file in the directory where you launched the pipeline)_ as well as custom Nextflow configuration files.
+
+  - type: textarea
+    id: system
+    attributes:
+      label: System information
+      description: |
+        * Nextflow version _(eg. 21.10.3)_
+        * Hardware _(eg. HPC, Desktop, Cloud)_
+        * Executor _(eg. slurm, local, awsbatch)_
+        * Container engine and version: _(e.g. Docker 1.0.0, Singularity, Conda, Podman, Shifter or Charliecloud)_
+        * OS and version: _(eg. CentOS Linux, macOS, Ubuntu 22.04)_
+        * Image tag: <!-- [e.g. nfcore/cellranger:2.6] -->
.github/ISSUE_TEMPLATE/feature_request.md (vendored, deleted: 32 lines)
@@ -1,32 +0,0 @@
----
-name: Feature request
-about: Suggest an idea for nf-core/modules
-title: "[FEATURE]"
----
-
-<!--
-# nf-core/modules feature request
-
-Hi there!
-
-Thanks for suggesting a new feature for the modules!
-Please delete this text and anything that's not relevant from the template below:
--->
-
-## Is your feature request related to a problem? Please describe
-
-<!-- A clear and concise description of what the problem is. -->
-
-<!-- e.g. [I'm always frustrated when ...] -->
-
-## Describe the solution you'd like
-
-<!-- A clear and concise description of what you want to happen. -->
-
-## Describe alternatives you've considered
-
-<!-- A clear and concise description of any alternative solutions or features you've considered. -->
-
-## Additional context
-
-<!-- Add any other context about the feature request here. -->
.github/ISSUE_TEMPLATE/feature_request.yml (vendored, new file: 32 lines)
@@ -0,0 +1,32 @@
+name: Feature request
+description: Suggest an idea for nf-core/modules
+labels: feature
+title: "[FEATURE]"
+body:
+  - type: textarea
+    id: description
+    attributes:
+      label: Is your feature request related to a problem? Please describe
+      description: A clear and concise description of what the bug is.
+      placeholder: |
+        <!-- e.g. [I'm always frustrated when ...] -->
+    validations:
+      required: true
+
+  - type: textarea
+    id: solution
+    attributes:
+      label: Describe the solution you'd like
+      description: A clear and concise description of the solution you want to happen.
+
+  - type: textarea
+    id: alternatives
+    attributes:
+      label: Describe alternatives you've considered
+      description: A clear and concise description of any alternative solutions or features you've considered.
+
+  - type: textarea
+    id: additional_context
+    attributes:
+      label: Additional context
+      description: Add any other context about the feature request here.
.github/ISSUE_TEMPLATE/new_module.md (vendored, deleted: 26 lines)
@@ -1,26 +0,0 @@
----
-name: New module
-about: Suggest a new module for nf-core/modules
-title: "new module: TOOL/SUBTOOL"
-label: new module
----
-
-<!--
-# nf-core/modules new module suggestion
-
-Hi there!
-
-Thanks for suggesting a new module for the modules!
-Please delete this text and anything that's not relevant from the template below:
-
-Replace TOOL with the bioconda name for the tool in the following text, so that the link is functional.
-
-Replace TOOL/SUBTOOL in the issue title so that it's understandable.
--->
-
-I think it would be good to have a module for [TOOL](https://bioconda.github.io/recipes/TOOL/README.html)
-
-- [ ] This module does not exist yet with the [`nf-core modules list`](https://github.com/nf-core/tools#list-modules) command
-- [ ] There is no [open pull request](https://github.com/nf-core/modules/pulls) for this module
-- [ ] There is no [open issue](https://github.com/nf-core/modules/issues) for this module
-- [ ] If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
.github/ISSUE_TEMPLATE/new_module.yml (vendored, new file: 36 lines)
@@ -0,0 +1,36 @@
+name: New module
+description: Suggest a new module for nf-core/modules
+title: "new module: TOOL/SUBTOOL"
+labels: new module
+body:
+  - type: checkboxes
+    attributes:
+      label: Is there an existing module for this?
+      description: This module does not exist yet with the [`nf-core modules list`](https://github.com/nf-core/tools#list-modules) command
+      options:
+        - label: I have searched for the existing module
+          required: true
+
+  - type: checkboxes
+    attributes:
+      label: Is there an open PR for this?
+      description: There is no [open pull request](https://github.com/nf-core/modules/pulls) for this module
+      options:
+        - label: I have searched for existing PRs
+          required: true
+
+  - type: checkboxes
+    attributes:
+      label: Is there an open issue for this?
+      description: There is no [open issue](https://github.com/nf-core/modules/issues) for this module
+      options:
+        - label: I have searched for existing issues
+          required: true
+
+  - type: checkboxes
+    attributes:
+      label: Are you going to work on this?
+      description: If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
+      options:
+        - label: If I'm planning to work on this module, I added myself to the `Assignees` to facilitate tracking who is working on the module
+          required: false
modules/adapterremoval/main.nf (modified)
@@ -12,14 +12,13 @@ process ADAPTERREMOVAL {
     path(adapterlist)

     output:
-    tuple val(meta), path("${prefix}.truncated.gz")           , optional: true, emit: singles_truncated
-    tuple val(meta), path("${prefix}.discarded.gz")           , optional: true, emit: discarded
-    tuple val(meta), path("${prefix}.pair1.truncated.gz")     , optional: true, emit: pair1_truncated
-    tuple val(meta), path("${prefix}.pair2.truncated.gz")     , optional: true, emit: pair2_truncated
-    tuple val(meta), path("${prefix}.collapsed.gz")           , optional: true, emit: collapsed
-    tuple val(meta), path("${prefix}.collapsed.truncated.gz") , optional: true, emit: collapsed_truncated
-    tuple val(meta), path("${prefix}.paired.gz")              , optional: true, emit: paired_interleaved
-    tuple val(meta), path('*.log')                            , emit: log
+    tuple val(meta), path("${prefix}.truncated.fastq.gz")           , optional: true, emit: singles_truncated
+    tuple val(meta), path("${prefix}.discarded.fastq.gz")           , optional: true, emit: discarded
+    tuple val(meta), path("${prefix}.pair{1,2}.truncated.fastq.gz") , optional: true, emit: paired_truncated
+    tuple val(meta), path("${prefix}.collapsed.fastq.gz")           , optional: true, emit: collapsed
+    tuple val(meta), path("${prefix}.collapsed.truncated.fastq.gz") , optional: true, emit: collapsed_truncated
+    tuple val(meta), path("${prefix}.paired.fastq.gz")              , optional: true, emit: paired_interleaved
+    tuple val(meta), path('*.settings')                             , emit: settings
     path "versions.yml"                                             , emit: versions

     when:
@@ -38,10 +37,19 @@ process ADAPTERREMOVAL {
         $adapterlist \\
         --basename ${prefix} \\
         --threads ${task.cpus} \\
-        --settings ${prefix}.log \\
         --seed 42 \\
         --gzip

+    ensure_fastq() {
+        if [ -f "\${1}" ]; then
+            mv "\${1}" "\${1::-3}.fastq.gz"
+        fi
+
+    }
+
+    ensure_fastq '${prefix}.truncated.gz'
+    ensure_fastq '${prefix}.discarded.gz'
+
     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
         adapterremoval: \$(AdapterRemoval --version 2>&1 | sed -e "s/AdapterRemoval ver. //g")
@@ -56,10 +64,24 @@ process ADAPTERREMOVAL {
         $adapterlist \\
         --basename ${prefix} \\
         --threads $task.cpus \\
-        --settings ${prefix}.log \\
         --seed 42 \\
         --gzip

+    ensure_fastq() {
+        if [ -f "\${1}" ]; then
+            mv "\${1}" "\${1::-3}.fastq.gz"
+        fi
+
+    }
+
+    ensure_fastq '${prefix}.truncated.gz'
+    ensure_fastq '${prefix}.discarded.gz'
+    ensure_fastq '${prefix}.pair1.truncated.gz'
+    ensure_fastq '${prefix}.pair2.truncated.gz'
+    ensure_fastq '${prefix}.collapsed.gz'
+    ensure_fastq '${prefix}.collapsed.truncated.gz'
+    ensure_fastq '${prefix}.paired.gz'
+
     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
         adapterremoval: \$(AdapterRemoval --version 2>&1 | sed -e "s/AdapterRemoval ver. //g")
|
@ -43,43 +43,43 @@ output:
|
||||||
Adapter trimmed FastQ files of either single-end reads, or singleton
|
Adapter trimmed FastQ files of either single-end reads, or singleton
|
||||||
'orphaned' reads from merging of paired-end data (i.e., one of the pair
|
'orphaned' reads from merging of paired-end data (i.e., one of the pair
|
||||||
was lost due to filtering thresholds).
|
was lost due to filtering thresholds).
|
||||||
pattern: "*.truncated.gz"
|
pattern: "*.truncated.fastq.gz"
|
||||||
- discarded:
|
- discarded:
|
||||||
type: file
|
type: file
|
||||||
description: |
|
description: |
|
||||||
Adapter trimmed FastQ files of reads that did not pass filtering
|
Adapter trimmed FastQ files of reads that did not pass filtering
|
||||||
thresholds.
|
thresholds.
|
||||||
pattern: "*.discarded.gz"
|
pattern: "*.discarded.fastq.gz"
|
||||||
- pair1_truncated:
|
- pair1_truncated:
|
||||||
type: file
|
type: file
|
||||||
description: |
|
description: |
|
||||||
Adapter trimmed R1 FastQ files of paired-end reads that did not merge
|
Adapter trimmed R1 FastQ files of paired-end reads that did not merge
|
||||||
with their respective R2 pair due to long templates. The respective pair
|
with their respective R2 pair due to long templates. The respective pair
|
||||||
is stored in 'pair2_truncated'.
|
is stored in 'pair2_truncated'.
|
||||||
pattern: "*.pair1.truncated.gz"
|
pattern: "*.pair1.truncated.fastq.gz"
|
||||||
- pair2_truncated:
|
- pair2_truncated:
|
||||||
type: file
|
type: file
|
||||||
description: |
|
description: |
|
||||||
Adapter trimmed R2 FastQ files of paired-end reads that did not merge
|
Adapter trimmed R2 FastQ files of paired-end reads that did not merge
|
||||||
with their respective R1 pair due to long templates. The respective pair
|
with their respective R1 pair due to long templates. The respective pair
|
||||||
is stored in 'pair1_truncated'.
|
is stored in 'pair1_truncated'.
|
||||||
pattern: "*.pair2.truncated.gz"
|
pattern: "*.pair2.truncated.fastq.gz"
|
||||||
- collapsed:
|
- collapsed:
|
||||||
type: file
|
type: file
|
||||||
description: |
|
description: |
|
||||||
Collapsed FastQ of paired-end reads that successfully merged with their
|
Collapsed FastQ of paired-end reads that successfully merged with their
|
||||||
respective R1 pair but were not trimmed.
|
respective R1 pair but were not trimmed.
|
||||||
pattern: "*.collapsed.gz"
|
pattern: "*.collapsed.fastq.gz"
|
||||||
- collapsed_truncated:
|
- collapsed_truncated:
|
||||||
type: file
|
type: file
|
||||||
description: |
|
description: |
|
||||||
Collapsed FastQ of paired-end reads that successfully merged with their
|
Collapsed FastQ of paired-end reads that successfully merged with their
|
||||||
respective R1 pair and were trimmed of adapter due to sufficient overlap.
|
respective R1 pair and were trimmed of adapter due to sufficient overlap.
|
||||||
pattern: "*.collapsed.truncated.gz"
|
pattern: "*.collapsed.truncated.fastq.gz"
|
||||||
- log:
|
- log:
|
||||||
type: file
|
type: file
|
||||||
description: AdapterRemoval log file
|
description: AdapterRemoval log file
|
||||||
pattern: "*.log"
|
pattern: "*.settings"
|
||||||
- versions:
|
- versions:
|
||||||
type: file
|
type: file
|
||||||
description: File containing software versions
|
description: File containing software versions
|
||||||
|
|
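For orientation (not part of the commit): the new `ensure_fastq` helper uses bash suffix slicing, `\${1::-3}`, to drop the trailing `.gz` before re-appending `.fastq.gz`, so `test.collapsed.gz` becomes `test.collapsed.fastq.gz`. A minimal DSL2 usage sketch of the updated module follows — the include path and test files are hypothetical, the first input is the module's usual `tuple val(meta), path(reads)`, and `[]` stands in for the optional adapter list:

include { ADAPTERREMOVAL } from './modules/adapterremoval/main'

workflow {
    // meta map + paired-end reads, following the usual nf-core channel shape
    reads = Channel.of([ [ id:'test', single_end:false ], [ file('test_1.fastq.gz'), file('test_2.fastq.gz') ] ])
    ADAPTERREMOVAL ( reads, [] )
    ADAPTERREMOVAL.out.paired_truncated.view() // *.pair{1,2}.truncated.fastq.gz
    ADAPTERREMOVAL.out.settings.view()         // per-sample AdapterRemoval *.settings report
}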
modules/amplify/predict/main.nf (new file: 41 lines)
@@ -0,0 +1,41 @@
+def VERSION = '1.0.3' // Version information not provided by tool
+
+process AMPLIFY_PREDICT {
+    tag "$meta.id"
+    label 'process_low'
+
+    conda (params.enable_conda ? "bioconda::amplify=1.0.3" : null)
+    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
+        'https://depot.galaxyproject.org/singularity/amplify:1.0.3--py36hdfd78af_0':
+        'quay.io/biocontainers/amplify:1.0.3--py36hdfd78af_0' }"
+
+    input:
+    tuple val(meta), path(faa)
+    path(model_dir)
+
+    output:
+    tuple val(meta), path('*.tsv'), emit: tsv
+    path "versions.yml"           , emit: versions
+
+    when:
+    task.ext.when == null || task.ext.when
+
+    script:
+    def args = task.ext.args ?: ''
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    def custom_model_dir = model_dir ? "-md ${model_dir}" : ""
+    """
+    AMPlify \\
+        $args \\
+        ${custom_model_dir} \\
+        -s '${faa}'
+
+    #rename output, because tool includes date and time in name
+    mv *.tsv ${prefix}.tsv
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        AMPlify: $VERSION
+    END_VERSIONS
+    """
+}
modules/amplify/predict/meta.yml (new file: 47 lines)
@@ -0,0 +1,47 @@
+name: "amplify_predict"
+description: AMPlify is an attentive deep learning model for antimicrobial peptide prediction.
+keywords:
+  - antimicrobial peptides
+  - AMPs
+  - prediction
+  - model
+tools:
+  - "amplify":
+      description: "Attentive deep learning model for antimicrobial peptide prediction"
+      homepage: "https://github.com/bcgsc/AMPlify"
+      documentation: "https://github.com/bcgsc/AMPlify"
+      tool_dev_url: "https://github.com/bcgsc/AMPlify"
+      doi: "https://doi.org/10.1186/s12864-022-08310-4"
+      licence: "['GPL v3']"
+
+input:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - faa:
+      type: file
+      description: amino acid sequences fasta
+      pattern: "*.{fa,fa.gz,faa,faa.gz,fasta,fasta.gz}"
+  - model_dir:
+      type: directory
+      description: Directory of where models are stored (optional)
+
+output:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - versions:
+      type: file
+      description: File containing software versions
+      pattern: "versions.yml"
+  - tsv:
+      type: file
+      description: amino acid sequences with prediction (AMP, non-AMP) and probability scores
+      pattern: "*.{tsv}"
+
+authors:
+  - "@louperelo"
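A minimal usage sketch for the new module (not part of the commit; include path and input file are hypothetical, and `[]` selects AMPlify's bundled models since `model_dir` is optional):

include { AMPLIFY_PREDICT } from './modules/amplify/predict/main'

workflow {
    faa = Channel.of([ [ id:'test' ], file('candidate_peptides.faa') ])
    AMPLIFY_PREDICT ( faa, [] )
    AMPLIFY_PREDICT.out.tsv.view() // predictions renamed to <prefix>.tsv by the module
}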
modules/antismash/antismashlite/main.nf (new file: 68 lines)
@@ -0,0 +1,68 @@
+process ANTISMASH_ANTISMASHLITE {
+    tag "$meta.id"
+    label 'process_medium'
+
+    conda (params.enable_conda ? "bioconda::antismash-lite=6.0.1" : null)
+    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
+        'https://depot.galaxyproject.org/singularity/antismash-lite:6.0.1--pyhdfd78af_1' :
+        'quay.io/biocontainers/antismash-lite:6.0.1--pyhdfd78af_1' }"
+
+    containerOptions {
+        workflow.containerEngine == 'singularity' ?
+        "-B $antismash_dir:/usr/local/lib/python3.8/site-packages/antismash" :
+        workflow.containerEngine == 'docker' ?
+        "-v \$PWD/$antismash_dir:/usr/local/lib/python3.8/site-packages/antismash" :
+        ''
+        }
+
+    input:
+    tuple val(meta), path(sequence_input)
+    path(databases)
+    path(antismash_dir) // Optional input: AntiSMASH installation folder. It is not needed for using this module with conda, but required for docker/singularity (see meta.yml).
+    path(gff)
+
+    output:
+    tuple val(meta), path("${prefix}/clusterblast/*_c*.txt")              , optional: true, emit: clusterblast_file
+    tuple val(meta), path("${prefix}/{css,images,js}")                    , emit: html_accessory_files
+    tuple val(meta), path("${prefix}/knownclusterblast/region*/ctg*.html"), optional: true, emit: knownclusterblast_html
+    tuple val(meta), path("${prefix}/knownclusterblast/*_c*.txt")         , optional: true, emit: knownclusterblast_txt
+    tuple val(meta), path("${prefix}/svg/clusterblast*.svg")              , optional: true, emit: svg_files_clusterblast
+    tuple val(meta), path("${prefix}/svg/knownclusterblast*.svg")         , optional: true, emit: svg_files_knownclusterblast
+    tuple val(meta), path("${prefix}/*.gbk")                              , emit: gbk_input
+    tuple val(meta), path("${prefix}/*.json")                             , emit: json_results
+    tuple val(meta), path("${prefix}/*.log")                              , emit: log
+    tuple val(meta), path("${prefix}/*.zip")                              , emit: zip
+    tuple val(meta), path("${prefix}/*region*.gbk")                       , emit: gbk_results
+    tuple val(meta), path("${prefix}/clusterblastoutput.txt")             , optional: true, emit: clusterblastoutput
+    tuple val(meta), path("${prefix}/index.html")                         , emit: html
+    tuple val(meta), path("${prefix}/knownclusterblastoutput.txt")        , optional: true, emit: knownclusterblastoutput
+    tuple val(meta), path("${prefix}/regions.js")                         , emit: json_sideloading
+    path "versions.yml"                                                   , emit: versions
+
+    when:
+    task.ext.when == null || task.ext.when
+
+    script:
+    def args = task.ext.args ?: ''
+    prefix = task.ext.suffix ? "${meta.id}${task.ext.suffix}" : "${meta.id}"
+    gff_flag = "--genefinding-gff3 ${gff}"
+
+    """
+    ## We specifically do not include annotations (--genefinding-tool none) as
+    ## this should be run as a separate module for versioning purposes
+    antismash \\
+        $args \\
+        $gff_flag \\
+        -c $task.cpus \\
+        --output-dir $prefix \\
+        --genefinding-tool none \\
+        --logfile $prefix/${prefix}.log \\
+        --databases $databases \\
+        $sequence_input
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        antismash-lite: \$(antismash --version | sed 's/antiSMASH //')
+    END_VERSIONS
+    """
+}
modules/antismash/antismashlite/meta.yml (new file: 128 lines)
@@ -0,0 +1,128 @@
+name: antismash_antismashlite
+description: |
+  antiSMASH allows the rapid genome-wide identification, annotation
+  and analysis of secondary metabolite biosynthesis gene clusters.
+keywords:
+  - secondary metabolites
+  - BGC
+  - biosynthetic gene cluster
+  - genome mining
+  - NRPS
+  - RiPP
+  - antibiotics
+  - prokaryotes
+  - bacteria
+  - eukaryotes
+  - fungi
+  - antismash
+
+tools:
+  - antismashlite:
+      description: "antiSMASH - the antibiotics and Secondary Metabolite Analysis SHell"
+      homepage: "https://docs.antismash.secondarymetabolites.org"
+      documentation: "https://docs.antismash.secondarymetabolites.org"
+      tool_dev_url: "https://github.com/antismash/antismash"
+      doi: "10.1093/nar/gkab335"
+      licence: "['AGPL v3']"
+
+input:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - sequence_input:
+      type: file
+      description: nucleotide sequence file (annotated)
+      pattern: "*.{gbk, gb, gbff, genbank, embl, fasta, fna}"
+  - databases:
+      type: directory
+      description: downloaded AntiSMASH databases e.g. data/databases
+      pattern: "*/"
+  - antismash_dir:
+      type: directory
+      description: |
+        A local copy of an AntiSMASH installation folder. This is required when running with
+        docker and singularity (not required for conda), due to attempted 'modifications' of
+        files during database checks in the installation directory, something that cannot
+        be done in immutable docker/singularity containers. Therefore, a local installation
+        directory needs to be mounted (including all modified files from the downloading step)
+        to the container as a workaround.
+      pattern: "*/"
+  - gff:
+      type: file
+      pattern: "*.gff"
+
+output:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - versions:
+      type: file
+      description: File containing software versions
+      pattern: "versions.yml"
+  - clusterblast_file:
+      type: file
+      description: Output of ClusterBlast algorithm
+      pattern: "clusterblast/*_c*.txt"
+  - html_accessory_files:
+      type: directory
+      description: Accessory files for the HTML output
+      pattern: "{css/,images/,js/}"
+  - knownclusterblast_html:
+      type: file
+      description: Tables with MIBiG hits in HTML format
+      pattern: "knownclusterblast/region*/ctg*.html"
+  - knownclusterblast_txt:
+      type: file
+      description: Tables with MIBiG hits
+      pattern: "knownclusterblast/*_c*.txt"
+  - svg_files_clusterblast:
+      type: file
+      description: SVG images showing the % identity of the aligned hits against their queries
+      pattern: "svg/clusterblast*.svg"
+  - svg_files_knownclusterblast:
+      type: file
+      description: SVG images showing the % identity of the aligned hits against their queries
+      pattern: "svg/knownclusterblast*.svg"
+  - gbk_input:
+      type: file
+      description: Nucleotide sequence and annotations in GenBank format; converted from input file
+      pattern: "*.gbk"
+  - json_results:
+      type: file
+      description: Nucleotide sequence and annotations in JSON format; converted from GenBank file (gbk_input)
+      pattern: "*.json"
+  - log:
+      type: file
+      description: Contains all the logging output that antiSMASH produced during its run
+      pattern: "*.log"
+  - zip:
+      type: file
+      description: Contains a compressed version of the output folder in zip format
+      pattern: "*.zip"
+  - gbk_results:
+      type: file
+      description: Nucleotide sequence and annotations in GenBank format; one file per antiSMASH hit
+      pattern: "*region*.gbk"
+  - clusterblastoutput:
+      type: file
+      description: Raw BLAST output of known clusters previously predicted by antiSMASH using the built-in ClusterBlast algorithm
+      pattern: "clusterblastoutput.txt"
+  - html:
+      type: file
+      description: Graphical web view of results in HTML format
+      pattern: "index.html"
+  - knownclusterblastoutput:
+      type: file
+      description: Raw BLAST output of known clusters of the MIBiG database
+      pattern: "knownclusterblastoutput.txt"
+  - json_sideloading:
+      type: file
+      description: Sideloaded annotations of protoclusters and/or subregions (see antiSMASH documentation "Annotation sideloading")
+      pattern: "regions.js"
+
+authors:
+  - "@jasmezz"
modules/antismash/antismashlitedownloaddatabases/main.nf (new file: 56 lines)
@@ -0,0 +1,56 @@
+process ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES {
+    label 'process_low'
+
+    conda (params.enable_conda ? "bioconda::antismash-lite=6.0.1" : null)
+    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
+        'https://depot.galaxyproject.org/singularity/antismash-lite:6.0.1--pyhdfd78af_1' :
+        'quay.io/biocontainers/antismash-lite:6.0.1--pyhdfd78af_1' }"
+
+    /*
+    These files are normally downloaded/created by download-antismash-databases itself, and must be retrieved for input by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines. This is solely for use for CI tests of the nf-core/module version of antiSMASH.
+    Reason: Upon execution, the tool checks if certain database files are present within the container and if not, it tries to create them in /usr/local/bin, for which only root user has write permissions. Mounting those database files with this module prevents the tool from trying to create them.
+    These files are also emitted as output channels in this module to enable the antismash-lite module to use them as mount volumes to the docker/singularity containers.
+    */
+
+    containerOptions {
+        workflow.containerEngine == 'singularity' ?
+        "-B $database_css:/usr/local/lib/python3.8/site-packages/antismash/outputs/html/css,$database_detection:/usr/local/lib/python3.8/site-packages/antismash/detection,$database_modules:/usr/local/lib/python3.8/site-packages/antismash/modules" :
+        workflow.containerEngine == 'docker' ?
+        "-v \$PWD/$database_css:/usr/local/lib/python3.8/site-packages/antismash/outputs/html/css -v \$PWD/$database_detection:/usr/local/lib/python3.8/site-packages/antismash/detection -v \$PWD/$database_modules:/usr/local/lib/python3.8/site-packages/antismash/modules" :
+        ''
+        }
+
+    input:
+    path database_css
+    path database_detection
+    path database_modules
+
+    output:
+    path("antismash_db") , emit: database
+    path("antismash_dir"), emit: antismash_dir
+    path "versions.yml", emit: versions
+
+    when:
+    task.ext.when == null || task.ext.when
+
+    script:
+    def args = task.ext.args ?: ''
+    conda = params.enable_conda
+    """
+    download-antismash-databases \\
+        --database-dir antismash_db \\
+        $args
+
+    if [[ $conda = false ]]; \
+    then \
+        cp -r /usr/local/lib/python3.8/site-packages/antismash antismash_dir; \
+    else \
+        cp -r \$(python -c 'import antismash;print(antismash.__file__.split("/__")[0])') antismash_dir; \
+    fi
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        antismash-lite: \$(antismash --version | sed 's/antiSMASH //')
+    END_VERSIONS
+    """
+}
modules/antismash/antismashlitedownloaddatabases/meta.yml (new file: 60 lines)
@@ -0,0 +1,60 @@
+name: antismash_antismashlitedownloaddatabases
+description: antiSMASH allows the rapid genome-wide identification, annotation and analysis of secondary metabolite biosynthesis gene clusters. This module downloads the antiSMASH databases.
+keywords:
+  - secondary metabolites
+  - BGC
+  - biosynthetic gene cluster
+  - genome mining
+  - NRPS
+  - RiPP
+  - antibiotics
+  - prokaryotes
+  - bacteria
+  - eukaryotes
+  - fungi
+  - antismash
+  - database
+tools:
+  - antismash:
+      description: antiSMASH - the antibiotics and Secondary Metabolite Analysis SHell
+      homepage: https://docs.antismash.secondarymetabolites.org
+      documentation: https://docs.antismash.secondarymetabolites.org
+      tool_dev_url: https://github.com/antismash/antismash
+      doi: "10.1093/nar/gkab335"
+      licence: ["AGPL v3"]
+
+input:
+  - database_css:
+      type: directory
+      description: |
+        antismash/outputs/html/css folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
+      pattern: "css"
+  - database_detection:
+      type: directory
+      description: |
+        antismash/detection folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
+      pattern: "detection"
+  - database_modules:
+      type: directory
+      description: |
+        antismash/modules folder which is being created during the antiSMASH database downloading step. These files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database in pipelines.
+      pattern: "modules"
+
+output:
+  - versions:
+      type: file
+      description: File containing software versions
+      pattern: "versions.yml"
+
+  - database:
+      type: directory
+      description: Download directory for antiSMASH databases
+      pattern: "antismash_db"
+  - antismash_dir:
+      type: directory
+      description: |
+        antismash installation folder which is being modified during the antiSMASH database downloading step. The modified files are normally downloaded by download-antismash-databases itself, and must be retrieved by the user by manually running the command with conda or a standalone installation of antiSMASH. Therefore we do not recommend using this module for production pipelines, but rather require users to specify their own local copy of the antiSMASH database and installation folder in pipelines.
+      pattern: "antismash_dir"
+
+authors:
+  - "@jasmezz"
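To make the mounting workaround concrete, a sketch of how the two antiSMASH modules chain together (not part of the commit; include paths and input files are hypothetical, and the css/detection/modules folders must come from a prior conda or standalone antiSMASH run, as the comments above stress):

include { ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES } from './modules/antismash/antismashlitedownloaddatabases/main'
include { ANTISMASH_ANTISMASHLITE                  } from './modules/antismash/antismashlite/main'

workflow {
    ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES (
        file('antismash/outputs/html/css'),
        file('antismash/detection'),
        file('antismash/modules')
    )

    ANTISMASH_ANTISMASHLITE (
        Channel.of([ [ id:'test' ], file('genome.fna') ]),
        ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES.out.database,      // antismash_db
        ANTISMASH_ANTISMASHLITEDOWNLOADDATABASES.out.antismash_dir, // mounted over the container's antismash package
        file('genome.gff')                                          // passed to --genefinding-gff3
    )
}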
modules/arriba/main.nf (modified; note: the committed line read "$structual_variants", corrected here to "$structural_variants" to match the variable it dereferences)
@@ -2,15 +2,20 @@ process ARRIBA {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::arriba=2.1.0" : null)
+    conda (params.enable_conda ? "bioconda::arriba=2.2.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/arriba:2.1.0--h3198e80_1' :
-        'quay.io/biocontainers/arriba:2.1.0--h3198e80_1' }"
+        'https://depot.galaxyproject.org/singularity/arriba:2.2.1--hecb563c_2' :
+        'quay.io/biocontainers/arriba:2.2.1--hecb563c_2' }"

     input:
     tuple val(meta), path(bam)
     path fasta
     path gtf
+    path blacklist
+    path known_fusions
+    path structural_variants
+    path tags
+    path protein_domains

     output:
     tuple val(meta), path("*.fusions.tsv")          , emit: fusions
@@ -23,7 +28,12 @@ process ARRIBA {
     script:
     def args = task.ext.args ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
-    def blacklist = (args.contains('-b')) ? '' : '-f blacklist'
+    def blacklist = blacklist ? "-b $blacklist" : "-f blacklist"
+    def known_fusions = known_fusions ? "-k $known_fusions" : ""
+    def structural_variants = structural_variants ? "-d $structural_variants" : ""
+    def tags = tags ? "-t $tags" : ""
+    def protein_domains = protein_domains ? "-p $protein_domains" : ""
+
     """
     arriba \\
         -x $bam \\
@@ -32,6 +42,10 @@ process ARRIBA {
         -o ${prefix}.fusions.tsv \\
         -O ${prefix}.fusions.discarded.tsv \\
         $blacklist \\
+        $known_fusions \\
+        $structural_variants \\
+        $tags \\
+        $protein_domains \\
         $args

     cat <<-END_VERSIONS > versions.yml
@@ -39,4 +53,14 @@ process ARRIBA {
         arriba: \$(arriba -h | grep 'Version:' 2>&1 | sed 's/Version:\s//')
     END_VERSIONS
     """
+
+    stub:
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    """
+    echo stub > ${prefix}.fusions.tsv
+    echo stub > ${prefix}.fusions.discarded.tsv
+
+    echo "${task.process}:" > versions.yml
+    echo '    arriba: 2.2.1' >> versions.yml
+    """
 }
modules/arriba/meta.yml (modified)
@@ -30,6 +30,26 @@ input:
       type: file
       description: Annotation GTF file
       pattern: "*.{gtf}"
+  - blacklist:
+      type: file
+      description: Blacklist file
+      pattern: "*.{tsv}"
+  - known_fusions:
+      type: file
+      description: Known fusions file
+      pattern: "*.{tsv}"
+  - structural_variants:
+      type: file
+      description: Structural variants file
+      pattern: "*.{tsv}"
+  - tags:
+      type: file
+      description: Tags file
+      pattern: "*.{tsv}"
+  - protein_domains:
+      type: file
+      description: Protein domains file
+      pattern: "*.{gff3}"

 output:
   - meta:
@@ -51,4 +71,4 @@ output:
       pattern: "*.{fusions.discarded.tsv}"

 authors:
-  - "@praveenraj2018"
+  - "@praveenraj2018,@rannick"
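A usage sketch of the extended module (not part of the commit; file names are hypothetical). Any optional input can be `[]`, which simply omits the corresponding flag — except `blacklist`, which then falls back to `-f blacklist`:

include { ARRIBA } from './modules/arriba/main'

workflow {
    bam = Channel.of([ [ id:'test', single_end:false ], file('Aligned.out.bam') ])
    ARRIBA ( bam, file('genome.fasta'), file('annotation.gtf'),
             file('blacklist.tsv'), [], [], [], [] )
    ARRIBA.out.fusions.view()
}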
modules/bamtools/split/main.nf (modified)
@@ -2,10 +2,10 @@ process BAMTOOLS_SPLIT {
     tag "$meta.id"
     label 'process_low'

-    conda (params.enable_conda ? "bioconda::bamtools=2.5.1" : null)
+    conda (params.enable_conda ? "bioconda::bamtools=2.5.2" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/bamtools:2.5.1--h9a82719_9' :
-        'quay.io/biocontainers/bamtools:2.5.1--h9a82719_9' }"
+        'https://depot.galaxyproject.org/singularity/bamtools:2.5.2--hd03093a_0' :
+        'quay.io/biocontainers/bamtools:2.5.2--hd03093a_0' }"

     input:
     tuple val(meta), path(bam)
@@ -20,10 +20,14 @@ process BAMTOOLS_SPLIT {
     script:
     def args = task.ext.args ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
+    def input_list = bam.collect{"-in $it"}.join(' ')
     """
     bamtools \\
+        merge \\
+        $input_list \\
+        | bamtools \\
         split \\
-        -in $bam \\
+        -stub $prefix \\
         $args

     cat <<-END_VERSIONS > versions.yml
modules/bamtools/split/meta.yml (modified)
@@ -23,7 +23,7 @@ input:
         e.g. [ id:'test', single_end:false ]
   - bam:
       type: file
-      description: A BAM file to split
+      description: A list of one or more BAM files to merge and then split
       pattern: "*.bam"

 output:
@@ -43,3 +43,4 @@ output:

 authors:
   - "@sguizard"
+  - "@matthdsm"
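A usage sketch of the merge-then-split behaviour (not part of the commit; file names are hypothetical). Passing several BAMs per sample now works because `bam.collect{"-in $it"}` expands the whole list for `bamtools merge` before the piped `bamtools split`:

include { BAMTOOLS_SPLIT } from './modules/bamtools/split/main'

workflow {
    bams = Channel.of([ [ id:'test' ], [ file('a.bam'), file('b.bam') ] ])
    BAMTOOLS_SPLIT ( bams ) // split criteria (e.g. -reference) go via task.ext.args
}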
modules/bbmap/align/main.nf (modified)
@@ -2,10 +2,10 @@ process BBMAP_ALIGN {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::bbmap=38.92 bioconda::samtools=1.13 pigz=2.6" : null)
+    conda (params.enable_conda ? "bioconda::bbmap=38.92 bioconda::samtools=1.15.1 pigz=2.6" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/mulled-v2-008daec56b7aaf3f162d7866758142b9f889d690:f5f55fc5623bb7b3f725e8d2f86bedacfd879510-0' :
-        'quay.io/biocontainers/mulled-v2-008daec56b7aaf3f162d7866758142b9f889d690:f5f55fc5623bb7b3f725e8d2f86bedacfd879510-0' }"
+        'https://depot.galaxyproject.org/singularity/mulled-v2-008daec56b7aaf3f162d7866758142b9f889d690:2fee0e0facec1dfe32a1ee4aa516aef7d0296ebf-0' :
+        'quay.io/biocontainers/mulled-v2-008daec56b7aaf3f162d7866758142b9f889d690:2fee0e0facec1dfe32a1ee4aa516aef7d0296ebf-0' }"

     input:
     tuple val(meta), path(fastq)
modules/bbmap/pileup/main.nf (modified)
@@ -2,10 +2,10 @@ process BBMAP_PILEUP {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::bbmap=38.92 bioconda::samtools=1.13 pigz=2.6" : null)
+    conda (params.enable_conda ? "bioconda::bbmap=38.92 bioconda::samtools=1.15.1 pigz=2.6" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/mulled-v2-008daec56b7aaf3f162d7866758142b9f889d690:f5f55fc5623bb7b3f725e8d2f86bedacfd879510-0' :
-        'quay.io/biocontainers/mulled-v2-008daec56b7aaf3f162d7866758142b9f889d690:f5f55fc5623bb7b3f725e8d2f86bedacfd879510-0' }"
+        'https://depot.galaxyproject.org/singularity/mulled-v2-008daec56b7aaf3f162d7866758142b9f889d690:2fee0e0facec1dfe32a1ee4aa516aef7d0296ebf-0' :
+        'quay.io/biocontainers/mulled-v2-008daec56b7aaf3f162d7866758142b9f889d690:2fee0e0facec1dfe32a1ee4aa516aef7d0296ebf-0' }"

     input:
     tuple val(meta), path(bam)
modules/bcftools/roh/main.nf (new file: 61 lines)
@@ -0,0 +1,61 @@
+process BCFTOOLS_ROH {
+    tag "$meta.id"
+    label 'process_medium'
+
+    conda (params.enable_conda ? "bioconda::bcftools=1.15.1" : null)
+    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
+        'https://depot.galaxyproject.org/singularity/bcftools:1.15.1--h0ea216a_0':
+        'quay.io/biocontainers/bcftools:1.15.1--h0ea216a_0' }"
+
+    input:
+    tuple val(meta), path(vcf), path(tbi)
+    path af_file
+    path genetic_map
+    path regions_file
+    path samples_file
+    path targets_file
+
+    output:
+    tuple val(meta), path("*.roh"), emit: roh
+    path "versions.yml"           , emit: versions
+
+    when:
+    task.ext.when == null || task.ext.when
+
+    script:
+    def args = task.ext.args ?: ''
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    def af_read = af_file ? "--AF-file ${af_file}" : ''
+    def gen_map = genetic_map ? "--genetic-map ${genetic_map}" : ''
+    def reg_file = regions_file ? "--regions-file ${regions_file}" : ''
+    def samp_file = samples_file ? "--samples-file ${samples_file}" : ''
+    def targ_file = targets_file ? "--targets-file ${targets_file}" : ''
+    """
+    bcftools \\
+        roh \\
+        $args \\
+        $af_read \\
+        $gen_map \\
+        $reg_file \\
+        $samp_file \\
+        $targ_file \\
+        -o ${prefix}.roh \\
+        $vcf
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        bcftools: \$(bcftools --version 2>&1 | head -n1 | sed 's/^.*bcftools //; s/ .*\$//')
+    END_VERSIONS
+    """
+
+    stub:
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    """
+    touch ${prefix}.roh
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        bcftools: \$(bcftools --version 2>&1 | head -n1 | sed 's/^.*bcftools //; s/ .*\$//')
+    END_VERSIONS
+    """
+}
modules/bcftools/roh/meta.yml (new file: 55 lines)
@@ -0,0 +1,55 @@
+name: "bcftools_roh"
+description: A program for detecting runs of homo/autozygosity. Only bi-allelic sites are considered.
+keywords:
+  - roh
+tools:
+  - "roh":
+      description: "A program for detecting runs of homo/autozygosity. Only bi-allelic sites are considered."
+      homepage: https://www.htslib.org/
+      documentation: http://www.htslib.org/doc/bcftools.html
+      doi: 10.1093/bioinformatics/btp352
+      licence: ["MIT"]
+
+input:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - vcf:
+      type: file
+      description: VCF file
+      pattern: "*.{vcf,.vcf.gz}"
+  - af_file:
+      type: file
+      description: "Read allele frequencies from a tab-delimited file containing the columns: CHROM\tPOS\tREF,ALT\tAF."
+  - genetic_map:
+      type: file
+      description: "Genetic map in the format required also by IMPUTE2."
+  - regions_file:
+      type: file
+      description: "Regions can be specified either on command line or in a VCF, BED, or tab-delimited file (the default)."
+  - samples_file:
+      type: file
+      description: "File of sample names to include or exclude if prefixed with '^'."
+  - targets_file:
+      type: file
+      description: "Targets can be specified either on command line or in a VCF, BED, or tab-delimited file (the default)."
+
+output:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - versions:
+      type: file
+      description: File containing software versions
+      pattern: "versions.yml"
+  - roh:
+      type: file
+      description: Contains site-specific and/or per-region runs of homo/autozygosity calls.
+      pattern: "*.{roh}"
+
+authors:
+  - "@ramprasadn"
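A usage sketch (not part of the commit; file names are hypothetical). Each of the five auxiliary inputs is optional, so `[]` drops the corresponding flag:

include { BCFTOOLS_ROH } from './modules/bcftools/roh/main'

workflow {
    vcf = Channel.of([ [ id:'test' ], file('sample.vcf.gz'), file('sample.vcf.gz.tbi') ])
    BCFTOOLS_ROH ( vcf, file('allele_freqs.tab.gz'), [], [], [], [] )
    BCFTOOLS_ROH.out.roh.view()
}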
modules/bclconvert/.gitignore (vendored, new file: 2 lines)
@@ -0,0 +1,2 @@
+bcl-convert
+*.rpm
modules/bclconvert/Dockerfile (new file: 15 lines)
@@ -0,0 +1,15 @@
+# Dockerfile to create container with bcl-convert
+# Push to nfcore/bclconvert:<VER>
+
+FROM debian:bullseye-slim
+LABEL authors="Matthias De Smet <matthias.desmet@ugent.be>" \
+    description="Docker image containing bcl-convert"
+# Disclaimer: this container is not provided nor supported by Illumina
+# 'ps' command is needed by some nextflow executions to collect system stats
+# Install procps and clean apt cache
+RUN apt-get update \
+    && apt-get install -y \
+        procps \
+    && apt-get clean -y && rm -rf /var/lib/apt/lists/*
+COPY bcl-convert /usr/local/bin/bcl-convert
+RUN chmod +x /usr/local/bin/bcl-convert
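Since bcl-convert cannot be redistributed and the image must be built and pushed by hand, a pipeline would pin it explicitly. A hedged Nextflow config sketch — the `<VER>` placeholder is kept from the comment above, and the process name is hypothetical:

process {
    withName: 'BCLCONVERT' {
        container = 'nfcore/bclconvert:<VER>' // the locally built image from this Dockerfile
    }
}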
modules/bclconvert/LICENSE (new file: 30 lines; truncated here)
@@ -0,0 +1,30 @@
+ILLUMINA END-USER SOFTWARE LICENSE AGREEMENT
+
+IMPORTANT-READ CAREFULLY. THIS IS A LICENSE AGREEMENT THAT YOU ARE REQUIRED TO ACCEPT BEFORE, DOWNLOADING, INSTALLING AND USING ANY SOFTWARE MADE AVAILABLE FROM THE ILLUMINA SUPPORT CENTER (https://support.illumina.com).
+
+CAREFULLY READ ALL THE TERMS AND CONDITIONS OF THIS LICENSE AGREEMENT BEFORE PROCEEDING WITH DOWNLOADING, INSTALLING, AND/OR USING THE SOFTWARE. YOU ARE NOT PERMITTED TO DOWNLOAD, INSTALL, AND/OR USE THE SOFTWARE UNTIL YOU HAVE AGREED TO BE BOUND BY ALL OF THE TERMS AND CONDITIONS OF THIS LICENSE AGREEMENT. YOU REPRESENT AND WARRANT THAT YOU ARE DULY AUTHORIZED TO ACCEPT THE TERMS AND CONDITIONS OF THIS LICENSE AGREEMENT ON BEHALF OF YOUR EMPLOYER.
+
+Software made available through the Illumina Support Center is licensed, not sold, to you. Your license to each software program made available through the Illumina Support Center is subject to your prior acceptance of either this Illumina End-User Software License Agreement (“Agreement”), or a custom end user license agreement (“Custom EULA”), if one is provided with the software. Any software that is subject to this Agreement is referred to herein as the “Software.” By accepting this Agreement, you agree the terms and conditions of this Agreement will apply to and govern any and all of your downloads, installations, and uses of each Illumina software program made available through the Illumina Support Center, except that your download, installation, and use of any software provided with a Custom EULA will be governed by the terms and conditions of the Custom EULA.
+
+This Agreement is made and entered into by and between Illumina, Inc., a Delaware corporation, having offices at 5200 Illumina Way, San Diego, CA 92122 (“Illumina”) and you as the end-user of the Software (hereinafter, “Licensee” or “you”). All software, firmware, and associated media, printed materials, and online and electronic documentation, including any updates or upgrades thereof, made available through the Illumina Support Center (collectively, “Software”) provided to Licensee are for use solely by Licensee and the provisions herein WILL apply with respect to such Software.
+
+License Grant. Subject to the terms and conditions of this Agreement, Illumina grants to Licensee, under the following terms and conditions, a personal, non-exclusive, revocable, non-transferable, non-sublicensable license, for its internal end-use purposes only, in the ordinary course of Licensee’s business to use the Software in executable object code form only, solely at the Licensee’s facility to, install and use the Software on a single computer accessible only by Licensee (and not on any public network or server), where the single computer is owned, leased, or otherwise substantially controlled by Licensee, for the purpose of processing and analyzing data generated from an Illumina genetic sequencing instrument owned and operated solely by Licensee (the “Product”). In the case of Software provided by Illumina in non-compiled form, Illumina grants Licensee a personal, non-exclusive, non-sublicenseable, restricted right to compile, install, and use one copy of the Software solely for processing and analyzing data generated from the Product.
+License Restrictions. Except as expressly permitted in Section 1, Licensee may not make, have made, import, use, copy, reproduce, distribute, display, publish, sell, re-sell, lease, or sub-license the Software, in whole or in part, except as expressly provided for in this Agreement. Licensee may not modify, improve, translate, reverse engineer, decompile, disassemble, or create derivative works of the Software or otherwise attempt to (a) defeat, avoid, by-pass, remove, deactivate, or otherwise circumvent any software protection mechanisms in the Software including, without limitation, any such mechanism used to restrict or control the functionality of the Software, or (b) derive the source code or the underlying ideas, algorithms, structure, or organization form of the Software. Licensee will not allow, at any time, including during and after the term of the license, the Software or any portions or copies thereof in any form to become available to any third parties. Licensee may use the Software solely with genomic data that is generated using the Product; Licensee may not use the Software with any data generated from other products or instruments. Licensee may not use the Software to perform any data analysis services for any third party.
+Ownership. The Software is protected by United States and international intellectual property laws. All right, title, and interest in and to the Software (including associated intellectual property rights) are and will remain vested in Illumina or Illumina’s affiliated companies or licensors. Licensee acknowledges that no rights, license or interest to any Illumina trademarks are granted hereunder. Licensee acknowledges that unauthorized reproduction or distribution of the Software, or any portion of it, may result in severe civil and criminal penalties. Illumina reserves all rights in and to the Software not expressly granted to Licensee under this Agreement.
+Upgrades/Updates. Illumina may, at its sole discretion, provide updates or upgrades to the Software. In that case, Licensee WILL have the same rights and obligations under such updates or upgrades as it has for the versions of the Software initially provided to Licensee hereunder. Licensee recognizes that Illumina is not obligated to provide any upgrades or updates to, or support for, the Software.
+Data Integrity/Loss. Licensee is responsible for the integrity and availability, including preventing the loss of data that Licensee generates, uses, analyzes, manages, or stores in connection with or through its use of the Software, including without limitation, investigating and implementing industry appropriate policies and procedures regarding the provision of access to Licensee’s data, monitoring access and use of Licensee’s data, conducting routine backups and archiving of Licensee’s data, and ensuring the adequacy of anti-virus software. Accordingly, Licensee agrees that Illumina is not responsible for any inability to access, loss or corruption of data as a result of Licensee’s use of the Software, and Illumina has no liability to Licensee in connection with such inability to access, loss or corruption of data.
+Term of License. This Agreement will be in effect from the time Licensee expressly accepts the terms and conditions of this license, or otherwise installs the Software, thereby accepting the terms and conditions contained herein, and will remain in effect until terminated. This license will otherwise terminate upon the conditions set forth in this Agreement, if revoked by Illumina, or if Licensee fails to comply with any term or condition of this Agreement including failure to pay any applicable license fee. Licensee agrees upon termination of this Agreement for any reason to immediately discontinue use of and un-install the Software and destroy all copies of the Software in its possession and/or under its control, and return or destroy, at Illumina’s option, any compact disks, floppy disks or other media provided by Illumina storing the Software thereon (together with any authorized copies thereof), as well as any documentation associated therewith
+Limited Warranty. Illumina warrants that, for a period of 6 months from the date of download or installation of the Software by Licensee, the Software will perform in all material respects in accordance with the accompanying documentation available on the Illumina Support Center. EXCEPT AND TO THE EXTENT EXPRESSLY PROVIDED IN THE FOREGOING, AND TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW, THE SOFTWARE IS PROVIDED “AS IS” AND ILLUMINA EXPRESSLY DISCLAIMS ALL WARRANTIES AND CONDITIONS REGARDING THE SOFTWARE AND RESULTS GENERATED BY THE SOFTWARE, INCLUDING WITHOUT LIMITATION, TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW, ALL OTHER EXPRESS OR IMPLIED WARRANTIES OR CONDITIONS OF MERCHANTABLE QUALITY, NON-INFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE, AND THOSE ARISING BY STATUTE OR OTHERWISE IN LAW OR FROM A COURSE OF DEALING OR USAGE OF TRADE. ILLUMINA DOES NOT WARRANT THAT THE FUNCTIONS CONTAINED IN THE SOFTWARE WILL MEET LICENSEE'S REQUIREMENTS, OR THAT THE OPERATION OF THE SOFTWARE WILL BE ERROR FREE OR UNINTERRUPTED.
|
||||||
|
Limitation of Liability.
|
||||||
|
(a) ILLUMINA’S ENTIRE LIABILITY AND LICENSEE"S EXCLUSIVE REMEDY UNDER THE LIMITED WARRANTY PROVISION OF SECTION 7 ABOVE WILL BE, AT ILLUMINA’S OPTION, EITHER (i) RETURN OF THE PRICE PAID FOR THE SOFTWARE, OR (ii) REPAIR OR REPLACEMENT OF THE PORTIONS OF THE SOFTWARE THAT DO NOT COMPLY WITH ILLUMINA’S LIMITED WARRANTY. THIS LIMITED WARRANTY IS VOID AND ILLUMINA WILL HAVE NO LIABILITY AT ALL IF FAILURE OF THE SOFTWARE TO COMPLY WITH ILLUMINA LIMITED WARRANTY HAS RESULTED FROM: (w) FAILURE TO USE THE SOFTWARE IN ACCORDANCE WITH ILLUMINA’S THEN CURRENT USER MANUAL OR THIS AGREEMENT; (x) ACCIDENT, ABUSE, OR MISAPPLICATION; (y) PRODUCTS OR EQUIPMENT NOT SPECIFIED BY ILLUMINA AS BEING COMPATIBLE WITH THE SOFTWARE; OR (z) IF LICENSEE HAS NOT NOTIFIED ILLUMINA IN WRITING OF THE DEFECT WITHIN THE ABOVE WARRANTY PERIOD.
|
||||||
|
|
||||||
|
(b) TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT WILL ILLUMINA BE LIABLE UNDER ANY THEORY OF CONTRACT, TORT, STRICT LIABILITY OR OTHER LEGAL OR EQUITABLE THEORY FOR ANY PERSONAL INJURY OR ANY INDIRECT, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, EVEN IF ILLUMINA HAS BEEN ADVISED OF THE POSSIBILITY THEREOF INCLUDING, WITHOUT LIMITATION, LOST PROFITS, LOST DATA, INTERRUPTION OF BUSINESS, LOST BUSINESS REVENUE, OTHER ECONOMIC LOSS, OR ANY LOSS OF RECORDED DATA ARISING OUT OF THE USE OF OR INABILITY TO USE THE SOFTWARE. EXCEPT AND TO THE EXTENT EXPRESSLY PROVIDED IN SECTION 7 AND 8(a) ABOVE OR AS OTHERWISE PERMITTED BY LAW, IN NO EVENT WILL ILLUMINA’S TOTAL LIABILITY TO LICENSEE FOR ALL DAMAGES (OTHER THAN AS MAY BE REQUIRED BY APPLICABLE LAW IN CASES INVOLVING PERSONAL INJURY) EXCEED THE AMOUNT OF $500 USD. THE FOREGOING LIMITATIONS WILL APPLY EVEN IF THE ABOVE STATED REMEDY FAILS OF ITS ESSENTIAL PURPOSE.
|
||||||
|
|
||||||
|
Survival. The limitations of liability and ownership rights of Illumina contained herein and Licensee’s obligations following termination of this Agreement WILL survive the termination of this Agreement for any reason.
|
||||||
|
Research Use Only. The Software is labeled with a For Research Use Only or similar labeling statement and the performance characteristics of the Software have not been established and the Software is not for use in diagnostic procedures. Licensee acknowledges and agrees that (i) the Software has not been approved, cleared, or licensed by the United States Food and Drug Administration or any other regulatory entity whether foreign or domestic for any specific intended use, whether research, commercial, diagnostic, or otherwise, and (ii) Licensee must ensure it has any regulatory approvals that are necessary for Licensee’s intended uses of the Software. Licensee will comply with all applicable laws and regulations when using and maintaining the Software.
|
||||||
|
General. Licensee may not sublicense, assign, share, pledge, rent or transfer any of its rights under this Agreement in relation to the Software or any portion thereof including documentation. Illumina reserves the right to change this Agreement at any time. When Illumina makes any changes, Illumina will provide the updated Agreement, or a link to it, on Illumina’s website (www.illumina.com) and such updated Agreement WILL become effective immediately. Licensee’s continued access to or use of the Software represents Licensee’s agreement to any revised Agreement. If one or more provisions of this Agreement are found to be invalid or unenforceable, this Agreement WILL not be rendered inoperative but the remaining provisions WILL continue in full force and effect. This Agreement constitutes the entire agreement between the parties with respect to the subject matter of this Agreement and merges all prior communications except that a “hard-copy” form of licensing agreement relating to the Software previously agreed to in writing by Illumina and Licensee WILL supersede and govern in the event of any conflicting provisions.
|
||||||
|
Governing Law. This Agreement WILL be governed by and construed in accordance with the laws of the state of California, USA, without regard to its conflicts of laws principles, and independent of where a suit or action hereunder may be filed.
|
||||||
|
U.S. Government End Users. If Licensee is a branch agency or instrumentality of the United States Government, the following provision applies. The Software is a “commercial item” as that term is defined at 48 C.F.R. 2.101, consisting of “commercial computer software” and “commercial computer software documentation,” as such terms are used in 48 C.F.R. 12.212 or 48 C.F.R. 227.7202 (as applicable). Consistent with 48 C.F.R. 12.212 and 48 C.F.R. 227.7202-1 through 227.7202-4, all United States Government end users acquire the Software with only those rights set forth herein.
|
||||||
|
Contact. Any questions regarding legal rights, duties, obligations, or restrictions associated with the software hereunder should be directed to Illumina, Inc., 5200 Illumina Way, San Diego, CA 92122, Attention: Legal Department, Phone: (858) 202-4500, Fax: (858) 202-4599, web site: www.illumina.com <http://www.illumina.com>.
|
||||||
|
Third Party Components. The Software may include third party software (“Third Party Programs”). Some of the Third Party Programs are available under open source or free software licenses. The License Agreement accompanying the Licensed Software does not alter any rights or obligations Licensee may have under those open source or free software licenses. The licenses that govern the terms and conditions of use of the Third Party Programs included in the Licensed Software are provided in the READ ME provided with the Software. The READ ME also contains copyright statements for the various open source software components (or portions thereof) that are distributed with the Licensed Software.
|
||||||
|
END OF END-USER SOFTWARE LICENSE AGREEMENT.
|
17 modules/bclconvert/README.md Normal file

@@ -0,0 +1,17 @@
# Updating the docker container and making a new module release

bcl-convert is a commercial tool from Illumina. The container provided for the bcl-convert nf-core module is neither provided nor supported by Illumina. Updating the bcl-convert version in the container and pushing the update to Dockerhub needs to be done manually.

1. Navigate to the appropriate download page. [BCL Convert](https://support.illumina.com/sequencing/sequencing_software/bcl-convert/downloads.html): download the RPM of the desired bcl-convert version with `curl` or `wget`.
2. Unpack the RPM package using `rpm2cpio bcl-convert-*.rpm | cpio -i --make-directories`. Place the executable located in `<unpack_dir>/usr/bin/bcl-convert` in the same folder where the Dockerfile lies.
3. Create and test the container:

   ```bash
   docker build . -t nfcore/bclconvert:<VERSION>
   ```

4. Access rights are needed to push the container to the Dockerhub nfcore organization; please ask a core team member to do so.

   ```bash
   docker push nfcore/bclconvert:<VERSION>
   ```
81 modules/bclconvert/main.nf Normal file

@@ -0,0 +1,81 @@
process BCLCONVERT {
    tag "$samplesheet"
    label 'process_high'

    if (params.enable_conda) {
        exit 1, "Conda environments cannot be used when using bcl-convert. Please use docker or singularity containers."
    }
    container "nfcore/bclconvert:3.9.3"

    input:
    path samplesheet
    path run_dir

    output:
    path "*.fastq.gz"              ,emit: fastq
    path "Reports/*.{csv,xml,bin}" ,emit: reports
    path "Logs/*.{log,txt}"        ,emit: logs
    path "InterOp/*.bin"           ,emit: interop
    path "versions.yml"            ,emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    def args = task.ext.args ?: ''

    """
    bcl-convert \\
        $args \\
        --output-directory . \\
        --bcl-input-directory ${run_dir} \\
        --sample-sheet ${samplesheet} \\
        --bcl-num-parallel-tiles ${task.cpus}

    mkdir InterOp
    cp ${run_dir}/InterOp/*.bin InterOp/
    mv Reports/*.bin InterOp/

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        bclconvert: \$(bcl-convert -V 2>&1 | head -n 1 | sed 's/^.*Version //')
    END_VERSIONS
    """

    stub:
    """
    echo "sample1_S1_L001_R1_001" > sample1_S1_L001_R1_001.fastq.gz
    echo "sample1_S1_L001_R2_001" > sample1_S1_L001_R2_001.fastq.gz
    echo "sample1_S1_L002_R1_001" > sample1_S1_L002_R1_001.fastq.gz
    echo "sample1_S1_L002_R2_001" > sample1_S1_L002_R2_001.fastq.gz
    echo "sample2_S2_L001_R1_001" > sample2_S2_L001_R1_001.fastq.gz
    echo "sample2_S2_L001_R2_001" > sample2_S2_L001_R2_001.fastq.gz
    echo "sample2_S2_L002_R1_001" > sample2_S2_L002_R1_001.fastq.gz
    echo "sample2_S2_L002_R2_001" > sample2_S2_L002_R2_001.fastq.gz

    mkdir Reports
    echo "Adapter_Metrics" > Reports/Adapter_Metrics.csv
    echo "Demultiplex_Stats" > Reports/Demultiplex_Stats.csv
    echo "fastq_list" > Reports/fastq_list.csv
    echo "Index_Hopping_Counts" > Reports/Index_Hopping_Counts.csv
    echo "IndexMetricsOut" > Reports/IndexMetricsOut.bin
    echo "Quality_Metrics" > Reports/Quality_Metrics.csv
    echo "RunInfo" > Reports/RunInfo.xml
    echo "SampleSheet" > Reports/SampleSheet.csv
    echo "Top_Unknown_Barcodes" > Reports/Top_Unknown_Barcodes.csv

    mkdir Logs
    echo "Errors" > Logs/Errors.log
    echo "FastqComplete" > Logs/FastqComplete.txt
    echo "Info" > Logs/Info.log
    echo "Warnings" > Logs/Warnings.log

    mkdir InterOp/
    echo "InterOp" > InterOp/InterOp.bin

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        bclconvert: \$(bcl-convert -V 2>&1 | head -n 1 | sed 's/^.*Version //')
    END_VERSIONS
    """
}
45 modules/bclconvert/meta.yml Normal file

@@ -0,0 +1,45 @@
name: "bclconvert"
description: Demultiplex Illumina BCL files
keywords:
  - demultiplex
  - illumina
  - fastq
tools:
  - "bclconvert":
      description: "Demultiplex Illumina BCL files"
      homepage: "https://support.illumina.com/sequencing/sequencing_software/bcl-convert.html"
      documentation: "https://support-docs.illumina.com/SW/BCL_Convert/Content/SW/FrontPages/BCL_Convert.htm"
      licence: "ILLUMINA"

input:
  - samplesheet:
      type: file
      description: "Input samplesheet"
      pattern: "*.{csv}"
  - run_dir:
      type: directory
      description: "Input run directory containing RunInfo.xml and BCL data"

output:
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"
  - fastq:
      type: file
      description: Demultiplexed FASTQ files
      pattern: "*.{fastq.gz}"
  - reports:
      type: file
      description: Demultiplexing Reports
      pattern: "Reports/*.{csv,xml,bin}"
  - logs:
      type: file
      description: Log files
      pattern: "Logs/*.{log,txt}"
  - interop:
      type: file
      description: InterOp files
      pattern: "InterOp/*.{bin}"
authors:
  - "@matthdsm"
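For orientation, a minimal sketch of how this module might be wired into a DSL2 workflow; the include path and the input file locations are illustrative assumptions, not part of the module itself.

```nextflow
// Hypothetical wiring for the BCLCONVERT module; paths are placeholders.
include { BCLCONVERT } from './modules/bclconvert/main'

workflow {
    samplesheet = file('SampleSheet.csv')      // assumed samplesheet location
    run_dir     = file('/path/to/run_folder')  // assumed Illumina run folder

    BCLCONVERT ( samplesheet, run_dir )
    BCLCONVERT.out.fastq.view()                // demultiplexed FASTQ files
}
```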
38 modules/bedtools/split/main.nf Normal file

@@ -0,0 +1,38 @@
process BEDTOOLS_SPLIT {
    tag "$meta.id"
    label 'process_low'

    conda (params.enable_conda ? "bioconda::bedtools=2.30.0" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://depot.galaxyproject.org/singularity/bedtools:2.30.0--h468198e_3':
        'quay.io/biocontainers/bedtools:2.30.0--h7d7f7ad_2' }"

    input:
    tuple val(meta), path(bed)
    val(number_of_files)

    output:
    tuple val(meta), path("*.bed"), emit: beds
    path "versions.yml"           , emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"

    """
    bedtools \\
        split \\
        $args \\
        -i $bed \\
        -p $prefix \\
        -n $number_of_files

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        bedtools: \$(bedtools --version | sed -e "s/bedtools v//g")
    END_VERSIONS
    """
}
41 modules/bedtools/split/meta.yml Normal file

@@ -0,0 +1,41 @@
name: "bedtools_split"
description: Split BED files into several smaller BED files
keywords:
  - split
  - bed
tools:
  - "bedtools":
      description: "A powerful toolset for genome arithmetic"
      documentation: "https://bedtools.readthedocs.io/en/latest/"
      licence: ["MIT", "GPL v2"]

input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bed:
      type: file
      description: BED file
      pattern: "*.bed"
  - number_of_files:
      type: value
      description: The number of files to split the BED into

output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"
  - beds:
      type: file
      description: list of split BED files
      pattern: "*.bed"

authors:
  - "@nvnieuwk"
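A minimal usage sketch, assuming the standard module path; the BED file and meta map are placeholders.

```nextflow
// Hypothetical invocation of BEDTOOLS_SPLIT; inputs are illustrative.
include { BEDTOOLS_SPLIT } from './modules/bedtools/split/main'

workflow {
    bed_ch = Channel.of( [ [ id:'test' ], file('regions.bed') ] )

    BEDTOOLS_SPLIT ( bed_ch, 4 )       // split regions.bed into 4 smaller BED files
    BEDTOOLS_SPLIT.out.beds.view()
}
```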
@@ -2,10 +2,8 @@ process BIOBAMBAM_BAMMARKDUPLICATES2 {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::biobambam=2.0.182" : null)
-    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/biobambam:2.0.182--h7d875b9_0':
-        'quay.io/biocontainers/biobambam:2.0.182--h7d875b9_0' }"
+    conda (params.enable_conda ? "bioconda::biobambam=2.0.183" : null)
+    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ? 'https://depot.galaxyproject.org/singularity/biobambam:2.0.183--h9f5acd7_1' : 'quay.io/biocontainers/biobambam:2.0.183--h9f5acd7_1'}"

     input:
     tuple val(meta), path(bam)
38 modules/biobambam/bammerge/main.nf Normal file

@@ -0,0 +1,38 @@
process BIOBAMBAM_BAMMERGE {
    tag "$meta.id"
    label 'process_low'

    conda (params.enable_conda ? "bioconda::biobambam=2.0.183" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://depot.galaxyproject.org/singularity/biobambam:2.0.183--h9f5acd7_1':
        'quay.io/biocontainers/biobambam:2.0.183--h9f5acd7_1' }"

    input:
    tuple val(meta), path(bam)

    output:
    tuple val(meta), path("${prefix}.bam") ,emit: bam
    tuple val(meta), path("*.bai")         ,optional:true, emit: bam_index
    tuple val(meta), path("*.md5")         ,optional:true, emit: checksum
    path "versions.yml"                    ,emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    def args = task.ext.args ?: ''
    prefix = task.ext.prefix ?: "${meta.id}"
    def input_string = bam.join(" I=")

    """
    bammerge \\
        I=${input_string} \\
        $args \\
        > ${prefix}.bam

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        bammerge: \$( bammerge --version |& sed '1!d; s/.*version //; s/.\$//' )
    END_VERSIONS
    """
}
46 modules/biobambam/bammerge/meta.yml Normal file

@@ -0,0 +1,46 @@
name: biobambam_bammerge
description: Merge a list of sorted bam files
keywords:
  - merge
  - bam
tools:
  - biobambam:
      description: |
        biobambam is a set of tools for early stage alignment file processing.
      homepage: https://gitlab.com/german.tischler/biobambam2
      documentation: https://gitlab.com/german.tischler/biobambam2/-/blob/master/README.md
      doi: 10.1186/1751-0473-9-13
      licence: ["GPL v3"]
input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
      description: List containing 1 or more bam files
output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
      description: Merged BAM file
      pattern: "*.bam"
  - bam_index:
      type: file
      description: BAM index file
      pattern: "*.bai"
  - checksum:
      type: file
      description: MD5 checksum file
      pattern: "*.md5"
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"
authors:
  - "@matthdsm"
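A minimal usage sketch, assuming the standard module path; the input BAMs are placeholders.

```nextflow
// Hypothetical invocation of BIOBAMBAM_BAMMERGE; inputs are illustrative.
include { BIOBAMBAM_BAMMERGE } from './modules/biobambam/bammerge/main'

workflow {
    // One meta map paired with a list of sorted BAM files to merge
    bam_ch = Channel.of( [ [ id:'sample1' ], [ file('a.sorted.bam'), file('b.sorted.bam') ] ] )

    BIOBAMBAM_BAMMERGE ( bam_ch )
    BIOBAMBAM_BAMMERGE.out.bam.view()   // merged BAM named after ext.prefix or meta.id
}
```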
46 modules/biobambam/bamsormadup/main.nf Normal file

@@ -0,0 +1,46 @@
process BIOBAMBAM_BAMSORMADUP {
    tag "$meta.id"
    label "process_medium"

    conda (params.enable_conda ? "bioconda::biobambam=2.0.183" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ? 'https://depot.galaxyproject.org/singularity/biobambam:2.0.183--h9f5acd7_1' : 'quay.io/biocontainers/biobambam:2.0.183--h9f5acd7_1'}"

    input:
    tuple val(meta), path(bams)
    path(fasta)

    output:
    tuple val(meta), path("*.{bam,cram}")  ,emit: bam
    tuple val(meta), path("*.bam.bai")     ,optional:true, emit: bam_index
    tuple val(meta), path("*.metrics.txt") ,emit: metrics
    path "versions.yml"                    ,emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
    def suffix = args.contains("outputformat=cram") ? "cram" : "bam"
    def input_string = bams.join(" I=")
    // bamsormadup needs the reference FASTA when writing CRAM
    def reference = fasta ? "reference=${fasta}" : ""

    if (args.contains("outputformat=cram") && !fasta) error "Reference required for CRAM output."

    """
    bamcat \\
        I=${input_string} \\
        level=0 \\
    | bamsormadup \\
        $args \\
        $reference \\
        M=${prefix}.metrics.txt \\
        tmpfile=$prefix \\
        threads=$task.cpus \\
        > ${prefix}.${suffix}

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        bamcat: \$(echo \$(bamsormadup --version 2>&1) | sed 's/^This is biobambam2 version //; s/..biobambam2 is .*\$//' )
        bamsormadup: \$(echo \$(bamsormadup --version 2>&1) | sed 's/^This is biobambam2 version //; s/..biobambam2 is .*\$//' )
    END_VERSIONS
    """
}
52 modules/biobambam/bamsormadup/meta.yml Normal file

@@ -0,0 +1,52 @@
name: biobambam_bamsormadup
description: Parallel sorting and duplicate marking
keywords:
  - markduplicates
  - sort
  - bam
  - cram
tools:
  - biobambam:
      description: |
        biobambam is a set of tools for early stage alignment file processing.
      homepage: https://gitlab.com/german.tischler/biobambam2
      documentation: https://gitlab.com/german.tischler/biobambam2/-/blob/master/README.md
      doi: 10.1186/1751-0473-9-13
      licence: ["GPL v3"]
input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bams:
      type: file
      description: List containing 1 or more bam files
  - fasta:
      type: file
      description: Reference genome in FASTA format (optional)
      pattern: "*.{fa,fasta}"
output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
      description: BAM/CRAM file with duplicate reads marked/removed
      pattern: "*.{bam,cram}"
  - bam_index:
      type: file
      description: BAM index file
      pattern: "*.{bai}"
  - metrics:
      type: file
      description: Duplicate metrics file generated by biobambam
      pattern: "*.{metrics.txt}"
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"
authors:
  - "@matthdsm"
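A minimal usage sketch, assuming the standard module path; file paths are placeholders.

```nextflow
// Hypothetical invocation of BIOBAMBAM_BAMSORMADUP; inputs are illustrative.
include { BIOBAMBAM_BAMSORMADUP } from './modules/biobambam/bamsormadup/main'

workflow {
    bams_ch = Channel.of( [ [ id:'sample1' ], [ file('a.bam'), file('b.bam') ] ] )
    fasta   = file('genome.fasta')   // only required when requesting CRAM output via ext.args

    BIOBAMBAM_BAMSORMADUP ( bams_ch, fasta )
    BIOBAMBAM_BAMSORMADUP.out.metrics.view()   // duplicate-marking metrics
}
```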
@@ -2,10 +2,10 @@ process BOWTIE_ALIGN {
     tag "$meta.id"
     label 'process_high'

-    conda (params.enable_conda ? 'bioconda::bowtie=1.3.0 bioconda::samtools=1.11' : null)
+    conda (params.enable_conda ? 'bioconda::bowtie=1.3.0 bioconda::samtools=1.15.1' : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/mulled-v2-ffbf83a6b0ab6ec567a336cf349b80637135bca3:9e14e16c284d6860574cf5b624bbc44c793cb024-0' :
-        'quay.io/biocontainers/mulled-v2-ffbf83a6b0ab6ec567a336cf349b80637135bca3:9e14e16c284d6860574cf5b624bbc44c793cb024-0' }"
+        'https://depot.galaxyproject.org/singularity/mulled-v2-ffbf83a6b0ab6ec567a336cf349b80637135bca3:676c5bcfe34af6097728fea60fb7ea83f94a4a5f-0' :
+        'quay.io/biocontainers/mulled-v2-ffbf83a6b0ab6ec567a336cf349b80637135bca3:676c5bcfe34af6097728fea60fb7ea83f94a4a5f-0' }"

     input:
     tuple val(meta), path(reads)
@@ -1,67 +1,62 @@
 process BOWTIE2_ALIGN {
     tag "$meta.id"
-    label 'process_high'
+    label "process_high"

-    conda (params.enable_conda ? 'bioconda::bowtie2=2.4.4 bioconda::samtools=1.14 conda-forge::pigz=2.6' : null)
-    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/mulled-v2-ac74a7f02cebcfcc07d8e8d1d750af9c83b4d45a:4d235f41348a00533f18e47c9669f1ecb327f629-0' :
-        'quay.io/biocontainers/mulled-v2-ac74a7f02cebcfcc07d8e8d1d750af9c83b4d45a:4d235f41348a00533f18e47c9669f1ecb327f629-0' }"
+    conda (params.enable_conda ? "bioconda::bowtie2=2.4.4 bioconda::samtools=1.15.1 conda-forge::pigz=2.6" : null)
+    container "${ workflow.containerEngine == "singularity" && !task.ext.singularity_pull_docker_container ?
+        "https://depot.galaxyproject.org/singularity/mulled-v2-ac74a7f02cebcfcc07d8e8d1d750af9c83b4d45a:1744f68fe955578c63054b55309e05b41c37a80d-0" :
+        "quay.io/biocontainers/mulled-v2-ac74a7f02cebcfcc07d8e8d1d750af9c83b4d45a:1744f68fe955578c63054b55309e05b41c37a80d-0" }"

     input:
     tuple val(meta), path(reads)
     path index
     val save_unaligned
+    val sort_bam

     output:
-    tuple val(meta), path('*.bam')    , emit: bam
-    tuple val(meta), path('*.log')    , emit: log
-    tuple val(meta), path('*fastq.gz'), emit: fastq, optional:true
+    tuple val(meta), path("*.bam")    , emit: bam
+    tuple val(meta), path("*.log")    , emit: log
+    tuple val(meta), path("*fastq.gz"), emit: fastq, optional:true
     path "versions.yml"               , emit: versions

     when:
     task.ext.when == null || task.ext.when

     script:
-    def args = task.ext.args ?: ''
-    def args2 = task.ext.args2 ?: ''
+    def args = task.ext.args ?: ""
+    def args2 = task.ext.args2 ?: ""
     def prefix = task.ext.prefix ?: "${meta.id}"
+
+    def unaligned = ""
+    def reads_args = ""
     if (meta.single_end) {
-        def unaligned = save_unaligned ? "--un-gz ${prefix}.unmapped.fastq.gz" : ''
-        """
-        INDEX=`find -L ./ -name "*.rev.1.bt2" | sed 's/.rev.1.bt2//'`
-        bowtie2 \\
-            -x \$INDEX \\
-            -U $reads \\
-            --threads $task.cpus \\
-            $unaligned \\
-            $args \\
-            2> ${prefix}.bowtie2.log \\
-            | samtools view -@ $task.cpus $args2 -bhS -o ${prefix}.bam -
-
-        cat <<-END_VERSIONS > versions.yml
-        "${task.process}":
-            bowtie2: \$(echo \$(bowtie2 --version 2>&1) | sed 's/^.*bowtie2-align-s version //; s/ .*\$//')
-            samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
-            pigz: \$( pigz --version 2>&1 | sed 's/pigz //g' )
-        END_VERSIONS
-        """
+        unaligned = save_unaligned ? "--un-gz ${prefix}.unmapped.fastq.gz" : ""
+        reads_args = "-U ${reads}"
     } else {
-        def unaligned = save_unaligned ? "--un-conc-gz ${prefix}.unmapped.fastq.gz" : ''
+        unaligned = save_unaligned ? "--un-conc-gz ${prefix}.unmapped.fastq.gz" : ""
+        reads_args = "-1 ${reads[0]} -2 ${reads[1]}"
+    }
+
+    def samtools_command = sort_bam ? 'sort' : 'view'
+
     """
-    INDEX=`find -L ./ -name "*.rev.1.bt2" | sed 's/.rev.1.bt2//'`
+    INDEX=`find -L ./ -name "*.rev.1.bt2" | sed "s/.rev.1.bt2//"`
+    [ -z "\$INDEX" ] && INDEX=`find -L ./ -name "*.rev.1.bt2l" | sed "s/.rev.1.bt2l//"`
+    [ -z "\$INDEX" ] && echo "Bowtie2 index files not found" 1>&2 && exit 1
+
     bowtie2 \\
         -x \$INDEX \\
-        -1 ${reads[0]} \\
-        -2 ${reads[1]} \\
+        $reads_args \\
         --threads $task.cpus \\
         $unaligned \\
         $args \\
         2> ${prefix}.bowtie2.log \\
-        | samtools view -@ $task.cpus $args2 -bhS -o ${prefix}.bam -
+        | samtools $samtools_command $args2 --threads $task.cpus -o ${prefix}.bam -

     if [ -f ${prefix}.unmapped.fastq.1.gz ]; then
         mv ${prefix}.unmapped.fastq.1.gz ${prefix}.unmapped_1.fastq.gz
     fi

     if [ -f ${prefix}.unmapped.fastq.2.gz ]; then
         mv ${prefix}.unmapped.fastq.2.gz ${prefix}.unmapped_2.fastq.gz
     fi
@@ -73,5 +68,4 @@ process BOWTIE2_ALIGN {
         pigz: \$( pigz --version 2>&1 | sed 's/pigz //g' )
     END_VERSIONS
     """
-    }
 }
@@ -29,6 +29,15 @@ input:
       type: file
       description: Bowtie2 genome index files
       pattern: "*.ebwt"
+  - save_unaligned:
+      type: boolean
+      description: |
+        Save reads that do not map to the reference (true) or discard them (false)
+        (default: false)
+  - sort_bam:
+      type: boolean
+      description: use samtools sort (true) or samtools view (false)
+      pattern: "true or false"
 output:
   - bam:
       type: file
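The two new inputs documented above are plain `val` arguments, so callers pass them positionally. A minimal sketch, assuming the standard module path and illustrative read/index locations:

```nextflow
include { BOWTIE2_ALIGN } from './modules/bowtie2/align/main'

workflow {
    reads_ch = Channel.of( [ [ id:'test', single_end:false ],
                             [ file('test_1.fastq.gz'), file('test_2.fastq.gz') ] ] )
    index    = file('bowtie2_index')   // directory containing the .bt2/.bt2l files

    // save_unaligned = false, sort_bam = true -> output is piped through `samtools sort`
    BOWTIE2_ALIGN ( reads_ch, index, false, true )
}
```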
84 modules/busco/main.nf Normal file

@@ -0,0 +1,84 @@
process BUSCO {
    tag "$meta.id"
    label 'process_medium'

    conda (params.enable_conda ? "bioconda::busco=5.3.2" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://depot.galaxyproject.org/singularity/busco:5.3.2--pyhdfd78af_0':
        'quay.io/biocontainers/busco:5.3.2--pyhdfd78af_0' }"

    input:
    tuple val(meta), path('tmp_input/*')
    each lineage              // Required: lineage to check against, "auto" enables --auto-lineage instead
    path busco_lineages_path  // Recommended: path to busco lineages - downloads if not set
    path config_file          // Optional: busco configuration file

    output:
    tuple val(meta), path("*-busco.batch_summary.txt"), emit: batch_summary
    tuple val(meta), path("short_summary.*.txt")      , emit: short_summaries_txt, optional: true
    tuple val(meta), path("short_summary.*.json")     , emit: short_summaries_json, optional: true
    tuple val(meta), path("*-busco")                  , emit: busco_dir
    path "versions.yml"                               , emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}-${lineage}"
    def busco_config = config_file ? "--config $config_file" : ''
    def busco_lineage = lineage.equals('auto') ? '--auto-lineage' : "--lineage_dataset ${lineage}"
    def busco_lineage_dir = busco_lineages_path ? "--offline --download_path ${busco_lineages_path}" : ''
    """
    # Nextflow changes the container --entrypoint to /bin/bash (container default entrypoint: /usr/local/env-execute)
    # Check for container variable initialisation script and source it.
    if [ -f "/usr/local/env-activate.sh" ]; then
        set +u  # Otherwise, errors out because of various unbound variables
        . "/usr/local/env-activate.sh"
        set -u
    fi

    # If the augustus config directory is not writable, then copy to writeable area
    if [ ! -w "\${AUGUSTUS_CONFIG_PATH}" ]; then
        # Create writable tmp directory for augustus
        AUG_CONF_DIR=\$( mktemp -d -p \$PWD )
        cp -r \$AUGUSTUS_CONFIG_PATH/* \$AUG_CONF_DIR
        export AUGUSTUS_CONFIG_PATH=\$AUG_CONF_DIR
        echo "New AUGUSTUS_CONFIG_PATH=\${AUGUSTUS_CONFIG_PATH}"
    fi

    # Ensure the input is uncompressed
    INPUT_SEQS=input_seqs
    mkdir "\$INPUT_SEQS"
    cd "\$INPUT_SEQS"
    for FASTA in ../tmp_input/*; do
        if [ "\${FASTA##*.}" == 'gz' ]; then
            gzip -cdf "\$FASTA" > \$( basename "\$FASTA" .gz )
        else
            ln -s "\$FASTA" .
        fi
    done
    cd ..

    busco \\
        --cpu $task.cpus \\
        --in "\$INPUT_SEQS" \\
        --out ${prefix}-busco \\
        $busco_lineage \\
        $busco_lineage_dir \\
        $busco_config \\
        $args

    # clean up
    rm -rf "\$INPUT_SEQS"

    # Move files to avoid staging/publishing issues
    mv ${prefix}-busco/batch_summary.txt ${prefix}-busco.batch_summary.txt
    mv ${prefix}-busco/*/short_summary.*.{json,txt} . || echo "Short summaries were not available: No genes were found."

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        busco: \$( busco --version 2>&1 | sed 's/^BUSCO //' )
    END_VERSIONS
    """
}
69 modules/busco/meta.yml Normal file

@@ -0,0 +1,69 @@
name: busco
description: Benchmarking Universal Single Copy Orthologs
keywords:
  - quality control
  - genome
  - transcriptome
  - proteome
tools:
  - busco:
      description: BUSCO provides measures for quantitative assessment of genome assembly, gene set, and transcriptome completeness based on evolutionarily informed expectations of gene content from near-universal single-copy orthologs selected from OrthoDB.
      homepage: https://busco.ezlab.org/
      documentation: https://busco.ezlab.org/busco_userguide.html
      tool_dev_url: https://gitlab.com/ezlab/busco
      doi: "10.1007/978-1-4939-9173-0_14"
      licence: ["MIT"]

input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - fasta:
      type: file
      description: Nucleic or amino acid sequence file in FASTA format.
      pattern: "*.{fasta,fna,fa,fasta.gz,fna.gz,fa.gz}"
  - lineage:
      type: value
      description: The BUSCO lineage to use, or "auto" to automatically select lineage
  - busco_lineages_path:
      type: directory
      description: Path to local BUSCO lineages directory.
  - config_file:
      type: file
      description: Path to BUSCO config file.

output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - batch_summary:
      type: file
      description: Summary of all sequence files analyzed
      pattern: "*-busco.batch_summary.txt"
  - short_summaries_txt:
      type: file
      description: Short Busco summary in plain text format
      pattern: "short_summary.*.txt"
  - short_summaries_json:
      type: file
      description: Short Busco summary in JSON format
      pattern: "short_summary.*.json"
  - busco_dir:
      type: directory
      description: BUSCO lineage specific output
      pattern: "*-busco"
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"

authors:
  - "@priyanka-surana"
  - "@charles-plessy"
  - "@mahesh-panchal"
  - "@muffato"
  - "@jvhagey"
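A minimal usage sketch, assuming the standard module path; the assembly path and lineage are placeholders, and the empty lists follow the common nf-core convention for skipping optional path inputs.

```nextflow
// Hypothetical invocation of BUSCO; inputs are illustrative.
include { BUSCO } from './modules/busco/main'

workflow {
    fasta_ch = Channel.of( [ [ id:'assembly1' ], file('assembly.fasta') ] )

    // lineage 'bacteria_odb10'; empty lists let BUSCO download lineages and use the default config
    BUSCO ( fasta_ch, 'bacteria_odb10', [], [] )
    BUSCO.out.batch_summary.view()
}
```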
@@ -2,10 +2,10 @@ process BWA_MEM {
     tag "$meta.id"
     label 'process_high'

-    conda (params.enable_conda ? "bioconda::bwa=0.7.17 bioconda::samtools=1.15" : null)
+    conda (params.enable_conda ? "bioconda::bwa=0.7.17 bioconda::samtools=1.15.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:c56a3aabc8d64e52d5b9da1e8ecec2031668596d-0' :
-        'quay.io/biocontainers/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:c56a3aabc8d64e52d5b9da1e8ecec2031668596d-0' }"
+        'https://depot.galaxyproject.org/singularity/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:8110a70be2bfe7f75a2ea7f2a89cda4cc7732095-0' :
+        'quay.io/biocontainers/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:8110a70be2bfe7f75a2ea7f2a89cda4cc7732095-0' }"

     input:
     tuple val(meta), path(reads)
@@ -23,14 +23,12 @@ process BWA_MEM {
     def args = task.ext.args ?: ''
     def args2 = task.ext.args2 ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
-    def read_group = meta.read_group ? "-R ${meta.read_group}" : ""
     def samtools_command = sort_bam ? 'sort' : 'view'
     """
     INDEX=`find -L ./ -name "*.amb" | sed 's/.amb//'`

     bwa mem \\
         $args \\
-        $read_group \\
         -t $task.cpus \\
         \$INDEX \\
         $reads \\
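With the hard-coded `-R ${meta.read_group}` removed from the command, a read group can still be injected through the `task.ext.args` hook. A hedged sketch of what that might look like in a pipeline's modules config; the closure syntax follows the nf-core `ext` convention and the RG fields are illustrative, not prescribed by this module:

```nextflow
// conf/modules.config -- illustrative only
process {
    withName: 'BWA_MEM' {
        // assumed read-group fields; adjust ID/SM/PL to the pipeline's needs
        ext.args = { "-R '@RG\\tID:${meta.id}\\tSM:${meta.id}'" }
    }
}
```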
@@ -2,10 +2,10 @@ process BWA_SAMPE {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::bwa=0.7.17 bioconda::samtools=1.15" : null)
+    conda (params.enable_conda ? "bioconda::bwa=0.7.17 bioconda::samtools=1.15.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:c56a3aabc8d64e52d5b9da1e8ecec2031668596d-0' :
-        'quay.io/biocontainers/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:c56a3aabc8d64e52d5b9da1e8ecec2031668596d-0' }"
+        'https://depot.galaxyproject.org/singularity/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:8110a70be2bfe7f75a2ea7f2a89cda4cc7732095-0' :
+        'quay.io/biocontainers/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:8110a70be2bfe7f75a2ea7f2a89cda4cc7732095-0' }"

     input:
     tuple val(meta), path(reads), path(sai)
@@ -2,10 +2,10 @@ process BWA_SAMSE {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::bwa=0.7.17 bioconda::samtools=1.15" : null)
+    conda (params.enable_conda ? "bioconda::bwa=0.7.17 bioconda::samtools=1.15.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:c56a3aabc8d64e52d5b9da1e8ecec2031668596d-0' :
-        'quay.io/biocontainers/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:c56a3aabc8d64e52d5b9da1e8ecec2031668596d-0' }"
+        'https://depot.galaxyproject.org/singularity/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:8110a70be2bfe7f75a2ea7f2a89cda4cc7732095-0' :
+        'quay.io/biocontainers/mulled-v2-fe8faa35dbf6dc65a0f7f5d4ea12e31a79f73e40:8110a70be2bfe7f75a2ea7f2a89cda4cc7732095-0' }"

     input:
     tuple val(meta), path(reads), path(sai)
@@ -2,10 +2,10 @@ process BWAMEM2_MEM {
     tag "$meta.id"
     label 'process_high'

-    conda (params.enable_conda ? "bioconda::bwa-mem2=2.2.1 bioconda::samtools=1.15" : null)
+    conda (params.enable_conda ? "bioconda::bwa-mem2=2.2.1 bioconda::samtools=1.15.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/mulled-v2-e5d375990341c5aef3c9aff74f96f66f65375ef6:8ee25ae85d7a2bacac3e3139db209aff3d605a18-0' :
-        'quay.io/biocontainers/mulled-v2-e5d375990341c5aef3c9aff74f96f66f65375ef6:8ee25ae85d7a2bacac3e3139db209aff3d605a18-0' }"
+        'https://depot.galaxyproject.org/singularity/mulled-v2-e5d375990341c5aef3c9aff74f96f66f65375ef6:38aed4501da19db366dc7c8d52d31d94e760cfaf-0' :
+        'quay.io/biocontainers/mulled-v2-e5d375990341c5aef3c9aff74f96f66f65375ef6:38aed4501da19db366dc7c8d52d31d94e760cfaf-0' }"

     input:
     tuple val(meta), path(reads)
@@ -23,7 +23,6 @@ process BWAMEM2_MEM {
     def args = task.ext.args ?: ''
     def args2 = task.ext.args2 ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
-    def read_group = meta.read_group ? "-R ${meta.read_group}" : ""
     def samtools_command = sort_bam ? 'sort' : 'view'
     """
     INDEX=`find -L ./ -name "*.amb" | sed 's/.amb//'`
@@ -31,7 +30,6 @@ process BWAMEM2_MEM {
     bwa-mem2 \\
         mem \\
         $args \\
-        $read_group \\
         -t $task.cpus \\
         \$INDEX \\
         $reads \\
@@ -4,8 +4,8 @@ process CAT_FASTQ {

     conda (params.enable_conda ? "conda-forge::sed=4.7" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://containers.biocontainers.pro/s3/SingImgsRepo/biocontainers/v1.2.0_cv1/biocontainers_v1.2.0_cv1.img' :
-        'biocontainers/biocontainers:v1.2.0_cv1' }"
+        'https://depot.galaxyproject.org/singularity/ubuntu:20.04' :
+        'ubuntu:20.04' }"

     input:
     tuple val(meta), path(reads, stageAs: "input*/*")
@@ -1,4 +1,4 @@
-process CENTRIFUGE {
+process CENTRIFUGE_CENTRIFUGE {
     tag "$meta.id"
     label 'process_high'

@@ -17,7 +17,6 @@ process CENTRIFUGE {
     output:
     tuple val(meta), path('*report.txt')                 , emit: report
     tuple val(meta), path('*results.txt')                , emit: results
-    tuple val(meta), path('*kreport.txt')                , emit: kreport
     tuple val(meta), path('*.sam')                       , optional: true, emit: sam
     tuple val(meta), path('*.mapped.fastq{,.1,.2}.gz')   , optional: true, emit: fastq_mapped
     tuple val(meta), path('*.unmapped.fastq{,.1,.2}.gz') , optional: true, emit: fastq_unmapped

@@ -30,7 +29,6 @@ process CENTRIFUGE {
     def args = task.ext.args ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
     def paired = meta.single_end ? "-U ${reads}" : "-1 ${reads[0]} -2 ${reads[1]}"
-    def db_name = db.toString().replace(".tar.gz","")
     def unaligned = ''
     def aligned = ''
     if (meta.single_end) {

@@ -42,9 +40,10 @@ process CENTRIFUGE {
     }
     def sam_output = sam_format ? "--out-fmt 'sam'" : ''
     """
-    tar -xf $db
+    ## we add "-no-name ._" to ensure silly Mac OSX metafiles files aren't included
+    db_name=`find -L ${db} -name "*.1.cf" -not -name "._*" | sed 's/.1.cf//'`
     centrifuge \\
-        -x $db_name \\
+        -x \$db_name \\
         -p $task.cpus \\
         $paired \\
         --report-file ${prefix}.report.txt \\

@@ -53,7 +52,6 @@ process CENTRIFUGE {
         $aligned \\
         $sam_output \\
         $args
-    centrifuge-kreport -x $db_name ${prefix}.results.txt > ${prefix}.kreport.txt

     cat <<-END_VERSIONS > versions.yml
     "${task.process}":
@@ -1,4 +1,4 @@
-name: centrifuge
+name: centrifuge_centrifuge
 description: Classifies metagenomic sequence data
 keywords:
   - classify

@@ -25,8 +25,7 @@ input:
         respectively.
   - db:
       type: directory
-      description: Centrifuge database in .tar.gz format
-      pattern: "*.tar.gz"
+      description: Path to directory containing centrifuge database files
   - save_unaligned:
       type: value
       description: If true unmapped fastq files are saved

@@ -49,12 +48,6 @@ output:
       description: |
         File containing classification results
       pattern: "*.{results.txt}"
-  - kreport:
-      type: file
-      description: |
-        File containing kraken-style report from centrifuge
-        out files.
-      pattern: "*.{kreport.txt}"
   - fastq_unmapped:
       type: file
       description: Unmapped fastq files
33 modules/centrifuge/kreport/main.nf Normal file

@@ -0,0 +1,33 @@
process CENTRIFUGE_KREPORT {
    tag "$meta.id"
    label 'process_low'

    conda (params.enable_conda ? "bioconda::centrifuge=1.0.4_beta" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://depot.galaxyproject.org/singularity/centrifuge:1.0.4_beta--h9a82719_6':
        'quay.io/biocontainers/centrifuge:1.0.4_beta--h9a82719_6' }"

    input:
    tuple val(meta), path(results)
    path db

    output:
    tuple val(meta), path('*.txt') , emit: kreport
    path "versions.yml"            , emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
    """
    db_name=`find -L ${db} -name "*.1.cf" -not -name "._*" | sed 's/.1.cf//'`
    centrifuge-kreport -x \$db_name ${results} > ${prefix}.txt

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        centrifuge: \$( centrifuge --version | sed -n 1p | sed 's/^.*centrifuge-class version //')
    END_VERSIONS
    """
}
41 modules/centrifuge/kreport/meta.yml Normal file

@@ -0,0 +1,41 @@
name: "centrifuge_kreport"
description: Creates Kraken-style reports from centrifuge out files
keywords:
  - metagenomics
tools:
  - centrifuge:
      description: Centrifuge is a classifier for metagenomic sequences.
      homepage: https://ccb.jhu.edu/software/centrifuge/
      documentation: https://ccb.jhu.edu/software/centrifuge/manual.shtml
      doi: 10.1101/gr.210641.116
      licence: ["GPL v3"]
input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - results:
      type: file
      description: File containing the centrifuge classification results
      pattern: "*.{txt}"
  - db:
      type: directory
      description: Path to directory containing centrifuge database files

output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"
  - kreport:
      type: file
      description: |
        File containing kraken-style report from centrifuge
        out files.
      pattern: "*.{txt}"
authors:
  - "@sofstam"
  - "@jfy133"
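A minimal usage sketch, assuming the standard module path; the results file and database directory are placeholders.

```nextflow
// Hypothetical invocation of CENTRIFUGE_KREPORT; inputs are illustrative.
include { CENTRIFUGE_KREPORT } from './modules/centrifuge/kreport/main'

workflow {
    results_ch = Channel.of( [ [ id:'test' ], file('test.results.txt') ] )
    db         = file('centrifuge_db')   // directory containing the *.cf index files

    CENTRIFUGE_KREPORT ( results_ch, db )
    CENTRIFUGE_KREPORT.out.kreport.view()
}
```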
@@ -2,10 +2,10 @@ process CHROMAP_CHROMAP {
     tag "$meta.id"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::chromap=0.2.1 bioconda::samtools=1.15" : null)
+    conda (params.enable_conda ? "bioconda::chromap=0.2.1 bioconda::samtools=1.15.1" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/mulled-v2-1f09f39f20b1c4ee36581dc81cc323c70e661633:bd74d08a359024829a7aec1638a28607bbcd8a58-0' :
-        'quay.io/biocontainers/mulled-v2-1f09f39f20b1c4ee36581dc81cc323c70e661633:bd74d08a359024829a7aec1638a28607bbcd8a58-0' }"
+        'https://depot.galaxyproject.org/singularity/mulled-v2-1f09f39f20b1c4ee36581dc81cc323c70e661633:963e4fe6a85c548a4018585660aed79780a175d3-0' :
+        'quay.io/biocontainers/mulled-v2-1f09f39f20b1c4ee36581dc81cc323c70e661633:963e4fe6a85c548a4018585660aed79780a175d3-0' }"

     input:
     tuple val(meta), path(reads)
36 modules/cnvkit/antitarget/main.nf Normal file

@@ -0,0 +1,36 @@
process CNVKIT_ANTITARGET {
    tag "$meta.id"
    label 'process_low'

    conda (params.enable_conda ? "bioconda::cnvkit=0.9.9" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://depot.galaxyproject.org/singularity/cnvkit:0.9.9--pyhdfd78af_0':
        'quay.io/biocontainers/cnvkit:0.9.9--pyhdfd78af_0' }"

    input:
    tuple val(meta), path(targets)

    output:
    tuple val(meta), path("*.bed"), emit: bed
    path "versions.yml"           , emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"

    """
    cnvkit.py \\
        antitarget \\
        $targets \\
        --output ${prefix}.antitarget.bed \\
        $args

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        cnvkit: \$(cnvkit.py version | sed -e "s/cnvkit v//g")
    END_VERSIONS
    """
}
modules/cnvkit/antitarget/meta.yml (new file, 44 lines)
@@ -0,0 +1,44 @@
name: cnvkit_antitarget
description: Derive off-target (antitarget) bins from target regions
keywords:
  - cnvkit
  - antitarget
tools:
  - cnvkit:
      description: |
        CNVkit is a Python library and command-line software toolkit to infer and visualize copy number from high-throughput DNA sequencing data.
        It is designed for use with hybrid capture, including both whole-exome and custom target panels, and short-read sequencing platforms such as Illumina and Ion Torrent.
      homepage: https://cnvkit.readthedocs.io/en/stable/index.html
      documentation: https://cnvkit.readthedocs.io/en/stable/index.html
      tool_dev_url: "https://github.com/etal/cnvkit"
      doi: 10.1371/journal.pcbi.1004873
      licence: ["Apache-2.0"]

input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - targets:
      type: file
      description: File containing genomic regions
      pattern: "*.{bed}"

output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bed:
      type: file
      description: File containing off-target regions
      pattern: "*.{bed}"
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"

authors:
  - "@SusiJo"
@@ -2,10 +2,10 @@ process CNVKIT_BATCH {
    tag "$meta.id"
    label 'process_low'

-   conda (params.enable_conda ? 'bioconda::cnvkit=0.9.9' : null)
+   conda (params.enable_conda ? 'bioconda::cnvkit=0.9.9 bioconda::samtools=1.15.1' : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/cnvkit:0.9.9--pyhdfd78af_0' :
+       'https://depot.galaxyproject.org/singularity/mulled-v2-780d630a9bb6a0ff2e7b6f730906fd703e40e98f:304d1c5ab610f216e77c61420ebe85f1e7c5968a-0' :
-       'quay.io/biocontainers/cnvkit:0.9.9--pyhdfd78af_0' }"
+       'quay.io/biocontainers/mulled-v2-780d630a9bb6a0ff2e7b6f730906fd703e40e98f:304d1c5ab610f216e77c61420ebe85f1e7c5968a-0' }"

    input:
    tuple val(meta), path(tumor), path(normal)

@@ -18,6 +18,8 @@ process CNVKIT_BATCH {
    tuple val(meta), path("*.cnn"), emit: cnn, optional: true
    tuple val(meta), path("*.cnr"), emit: cnr, optional: true
    tuple val(meta), path("*.cns"), emit: cns, optional: true
+   tuple val(meta), path("*.pdf"), emit: pdf, optional: true
+   tuple val(meta), path("*.png"), emit: png, optional: true
    path "versions.yml"           , emit: versions

    when:

@@ -25,21 +27,39 @@ process CNVKIT_BATCH {

    script:
    def args = task.ext.args ?: ''
-   def normal_args = normal ? "--normal $normal" : ""
-   def fasta_args = fasta ? "--fasta $fasta" : ""
+   // execute samtools only when cram files are input, cnvkit runs natively on bam but is prohibitively slow
+   // input pair is assumed to have same extension if both exist
+   def is_cram = tumor.Extension == "cram" ? true : false
+   def tumor_out = is_cram ? tumor.BaseName + ".bam" : "${tumor}"
+
+   // do not run samtools on normal samples in tumor_only mode
+   def normal_exists = normal ? true : false
+   // tumor_only mode does not need fasta & target
+   // instead it requires a pre-computed reference.cnn which is built from fasta & target
+   def (normal_out, normal_args, fasta_args) = ["", "", ""]
+
+   if (normal_exists){
+       def normal_prefix = normal.BaseName
+       normal_out = is_cram ? "${normal_prefix}" + ".bam" : "${normal}"
+       normal_args = normal_prefix ? "--normal $normal_out" : ""
+       fasta_args = fasta ? "--fasta $fasta" : ""
+   }
+
-   def target_args = targets ? "--targets $targets" : ""
    def reference_args = reference ? "--reference $reference" : ""
+
+   def target_args = ""
+   if (args.contains("--method wgs") || args.contains("-m wgs")) {
+       target_args = targets ? "--targets $targets" : ""
+   }
+   else {
+       target_args = "--targets $targets"
+   }

    """
+   if $is_cram; then
+       samtools view -T $fasta $tumor -@ $task.cpus -o $tumor_out
+       if $normal_exists; then
+           samtools view -T $fasta $normal -@ $task.cpus -o $normal_out
+       fi
+   fi
+
    cnvkit.py \\
        batch \\
-       $tumor \\
+       $tumor_out \\
        $normal_args \\
        $fasta_args \\
        $reference_args \\
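To make the new CRAM handling concrete, here is a hedged invocation sketch. The trailing input order (fasta, targets, reference) is an assumption based on the variables the script block uses, and all file names are invented.

    // Hypothetical call: CRAM inputs are converted to BAM with samtools
    // inside the module before cnvkit.py batch runs.
    include { CNVKIT_BATCH } from '../modules/cnvkit/batch/main'

    workflow {
        pairs = Channel.of( [ [ id:'test' ], file('tumor.cram'), file('normal.cram') ] )
        CNVKIT_BATCH ( pairs, file('genome.fasta'), file('targets.bed'), [] )   // [] = no precomputed reference
    }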
@@ -11,27 +11,6 @@ tools:
      homepage: https://cnvkit.readthedocs.io/en/stable/index.html
      documentation: https://cnvkit.readthedocs.io/en/stable/index.html
      licence: ["Apache-2.0"]
-params:
-  - outdir:
-      type: string
-      description: |
-        The pipeline's output directory. By default, the module will
-        output files into `$params.outdir/<SOFTWARE>`
-  - publish_dir_mode:
-      type: string
-      description: |
-        Value for the Nextflow `publishDir` mode parameter.
-        Available: symlink, rellink, link, copy, copyNoFollow, move.
-  - enable_conda:
-      type: boolean
-      description: |
-        Run the module with Conda using the software specified
-        via the `conda` directive
-  - singularity_pull_docker_container:
-      type: boolean
-      description: |
-        Instead of directly downloading Singularity images for use with Singularity,
-        force the workflow to pull and convert Docker containers instead.
input:
  - meta:
      type: map

@@ -49,7 +28,7 @@ input:
  - fasta:
      type: file
      description: |
-       Input reference genome fasta file
+       Input reference genome fasta file (only needed for cram_input and/or when normal_samples are provided)
  - targetfile:
      type: file
      description: |

@@ -80,6 +59,14 @@ output:
      type: file
      description: File containing copy number segment information
      pattern: "*.{cns}"
+  - pdf:
+      type: file
+      description: File with plot of copy numbers or segments on chromosomes
+      pattern: "*.{pdf}"
+  - png:
+      type: file
+      description: File with plot of bin-level log2 coverages and segmentation calls
+      pattern: "*.{png}"
  - versions:
      type: file
      description: File containing software versions

@@ -91,3 +78,4 @@ authors:
  - "@drpatelh"
  - "@fbdtemme"
  - "@lassefolkersen"
+  - "@SusiJo"
modules/cnvkit/reference/main.nf (new file, 40 lines)
@@ -0,0 +1,40 @@
process CNVKIT_REFERENCE {
    tag "$fasta"
    label 'process_low'

    conda (params.enable_conda ? "bioconda::cnvkit=0.9.9" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://depot.galaxyproject.org/singularity/cnvkit:0.9.9--pyhdfd78af_0':
        'quay.io/biocontainers/cnvkit:0.9.9--pyhdfd78af_0' }"

    input:
    path fasta
    path targets
    path antitargets

    output:
    path "*.cnn"       , emit: cnn
    path "versions.yml", emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: targets.BaseName

    """
    cnvkit.py \\
        reference \\
        --fasta $fasta \\
        --targets $targets \\
        --antitargets $antitargets \\
        --output ${prefix}.reference.cnn \\
        $args

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        cnvkit: \$(cnvkit.py version | sed -e "s/cnvkit v//g")
    END_VERSIONS
    """
}
modules/cnvkit/reference/meta.yml (new file, 47 lines)
@@ -0,0 +1,47 @@
name: cnvkit_reference
description: Compile a copy-number reference from target and antitarget regions of a reference genome
keywords:
  - cnvkit
  - reference
tools:
  - cnvkit:
      description: |
        CNVkit is a Python library and command-line software toolkit to infer and visualize copy number from high-throughput DNA sequencing data.
        It is designed for use with hybrid capture, including both whole-exome and custom target panels, and short-read sequencing platforms such as Illumina and Ion Torrent.
      homepage: https://cnvkit.readthedocs.io/en/stable/index.html
      documentation: https://cnvkit.readthedocs.io/en/stable/index.html
      tool_dev_url: https://github.com/etal/cnvkit
      doi: 10.1371/journal.pcbi.1004873
      licence: ["Apache-2.0"]

input:
  - fasta:
      type: file
      description: File containing reference genome
      pattern: "*.{fasta}"
  - targets:
      type: file
      description: File containing genomic regions
      pattern: "*.{bed}"
  - antitargets:
      type: file
      description: File containing off-target genomic regions
      pattern: "*.{bed}"

output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - reference:
      type: file
      description: File containing a copy-number reference (required for CNV calling in tumor_only mode)
      pattern: "*.{cnn}"
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"

authors:
  - "@SusiJo"
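Together, the two new modules cover the tumor-only preparation described above: derive antitarget bins from the target BED, then compile a reference.cnn. A sketch under assumed include paths and invented channel names:

    // Hypothetical tumor-only prep chain: antitarget -> reference.cnn.
    include { CNVKIT_ANTITARGET } from '../modules/cnvkit/antitarget/main'
    include { CNVKIT_REFERENCE  } from '../modules/cnvkit/reference/main'

    workflow {
        targets = Channel.of( [ [ id:'panel' ], file('targets.bed') ] )
        CNVKIT_ANTITARGET ( targets )
        // CNVKIT_REFERENCE takes plain paths, so strip the meta map first
        CNVKIT_REFERENCE (
            file('genome.fasta'),
            file('targets.bed'),
            CNVKIT_ANTITARGET.out.bed.map { meta, bed -> bed }
        )
    }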
@@ -2,32 +2,42 @@ process CNVPYTOR_CALLCNVS {
    tag "$meta.id"
    label 'process_medium'

-   conda (params.enable_conda ? "bioconda::cnvpytor=1.0" : null)
+   conda (params.enable_conda ? "bioconda::cnvpytor=1.2.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/cnvpytor:1.0--py39h6a678da_2':
+       'https://depot.galaxyproject.org/singularity/cnvpytor:1.2.1--pyhdfd78af_0':
-       'quay.io/biocontainers/cnvpytor:1.0--py39h6a678da_2' }"
+       'quay.io/biocontainers/cnvpytor:1.2.1--pyhdfd78af_0' }"

    input:
    tuple val(meta), path(pytor)
+   val bin_sizes

    output:
-   tuple val(meta), path("*.tsv"), emit: cnvs
+   tuple val(meta), path("${pytor.baseName}.pytor") , emit: pytor
    path "versions.yml"           , emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
-   def args = task.ext.args ?: '1000'
-   def prefix = task.ext.prefix ?: "${meta.id}"
+   def bins = bin_sizes ?: '1000'
    """
    cnvpytor \\
        -root $pytor \\
-       -call $args > ${prefix}.tsv
+       -call $bin_sizes

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
-       cnvpytor: \$(echo \$(cnvpytor --version 2>&1) | sed 's/^.*pyCNVnator //; s/Using.*\$//' ))
+       cnvpytor: \$(echo \$(cnvpytor --version 2>&1) | sed 's/CNVpytor //' ))
    END_VERSIONS
    """

+   stub:
+   """
+   touch ${pytor.baseName}.pytor
+
+   cat <<-END_VERSIONS > versions.yml
+   "${task.process}":
+       cnvpytor: \$(echo \$(cnvpytor --version 2>&1) | sed 's/CNVpytor //' ))
+   END_VERSIONS
+   """
}

@@ -17,8 +17,11 @@ input:
        e.g. [ id:'test']
  - pytor:
      type: file
-     description: cnvpytor root file
+     description: pytor file containing partitions of read depth histograms using mean-shift method
      pattern: "*.{pytor}"
+  - bin_sizes:
+      type: string
+      description: list of binsizes separated by space e.g. "1000 10000" and "1000"

output:
  - meta:

@@ -26,10 +29,10 @@ output:
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test' ]
-  - cnvs:
+  - pytor:
      type: file
-     description: file containing identified copy number variations
+     description: pytor files containing cnv calls
-     pattern: "*.{tsv}"
+     pattern: "*.{pytor}"
  - versions:
      type: file
      description: File containing software versions
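A short sketch of the reworked interface, since the `bin_sizes` value input replaces the old `task.ext.args` route; the include path and channel contents are illustrative:

    // Hypothetical call: bin sizes are now an explicit value input.
    include { CNVPYTOR_CALLCNVS } from '../modules/cnvpytor/callcnvs/main'

    workflow {
        pytor = Channel.of( [ [ id:'test' ], file('test.pytor') ] )
        CNVPYTOR_CALLCNVS ( pytor, '1000 10000' )   // call CNVs at two bin sizes
    }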
@@ -2,13 +2,15 @@ process CNVPYTOR_HISTOGRAM {
    tag "$meta.id"
    label 'process_medium'

-   conda (params.enable_conda ? "bioconda::cnvpytor=1.0" : null)
+   conda (params.enable_conda ? "bioconda::cnvpytor=1.2.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/cnvpytor:1.0--py39h6a678da_2':
+       'https://depot.galaxyproject.org/singularity/cnvpytor:1.2.1--pyhdfd78af_0':
-       'quay.io/biocontainers/cnvpytor:1.0--py39h6a678da_2' }"
+       'quay.io/biocontainers/cnvpytor:1.2.1--pyhdfd78af_0' }"

    input:
    tuple val(meta), path(pytor)
+   val bin_sizes

    output:
    tuple val(meta), path("${pytor.baseName}.pytor") , emit: pytor

@@ -18,15 +20,25 @@ process CNVPYTOR_HISTOGRAM {
    task.ext.when == null || task.ext.when

    script:
-   def args = task.ext.args ?: '1000'
+   def bins = bin_sizes ?: '1000'
    """
    cnvpytor \\
        -root $pytor \\
-       -his $args
+       -his $bins

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
-       cnvpytor: \$(echo \$(cnvpytor --version 2>&1) | sed 's/^.*pyCNVnator //; s/Using.*\$//' ))
+       cnvpytor: \$(echo \$(cnvpytor --version 2>&1) | sed 's/CNVpytor //' ))
    END_VERSIONS
    """

+   stub:
+   """
+   touch ${pytor.baseName}.pytor
+
+   cat <<-END_VERSIONS > versions.yml
+   "${task.process}":
+       cnvpytor: \$(echo \$(cnvpytor --version 2>&1) | sed 's/CNVpytor //' ))
+   END_VERSIONS
+   """
}

@@ -22,6 +22,9 @@ input:
      type: file
      description: pytor file containing read depth data
      pattern: "*.{pytor}"
+  - bin_sizes:
+      type: string
+      description: list of binsizes separated by space e.g. "1000 10000" and "1000"

output:
  - meta:

@@ -40,3 +43,4 @@ output:

authors:
  - "@sima-r"
+  - "@ramprasadn"
@@ -2,10 +2,10 @@ process CNVPYTOR_IMPORTREADDEPTH {
    tag "$meta.id"
    label 'process_medium'

-   conda (params.enable_conda ? "bioconda::cnvpytor=1.0" : null)
+   conda (params.enable_conda ? "bioconda::cnvpytor=1.2.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/cnvpytor:1.0--py39h6a678da_2':
+       'https://depot.galaxyproject.org/singularity/cnvpytor:1.2.1--pyhdfd78af_0':
-       'quay.io/biocontainers/cnvpytor:1.0--py39h6a678da_2' }"
+       'quay.io/biocontainers/cnvpytor:1.2.1--pyhdfd78af_0' }"

    input:
    tuple val(meta), path(input_file), path(index)

@@ -32,7 +32,18 @@ process CNVPYTOR_IMPORTREADDEPTH {

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
-       cnvpytor: \$(echo \$(cnvpytor --version 2>&1) | sed 's/^.*pyCNVnator //; s/Using.*\$//' ))
+       cnvpytor: \$(echo \$(cnvpytor --version 2>&1) | sed 's/CNVpytor //' ))
    END_VERSIONS
    """

+   stub:
+   def prefix = task.ext.prefix ?: "${meta.id}"
+   """
+   touch ${prefix}.pytor
+
+   cat <<-END_VERSIONS > versions.yml
+   "${task.process}":
+       cnvpytor: \$(echo \$(cnvpytor --version 2>&1) | sed 's/CNVpytor //' ))
+   END_VERSIONS
+   """
}

@@ -52,3 +52,4 @@ output:

authors:
  - "@sima-r"
+  - "@ramprasadn"
@@ -2,13 +2,14 @@ process CNVPYTOR_PARTITION {
    tag "$meta.id"
    label 'process_medium'

-   conda (params.enable_conda ? "bioconda::cnvpytor=1.0" : null)
+   conda (params.enable_conda ? "bioconda::cnvpytor=1.2.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/cnvpytor:1.0--py39h6a678da_2':
+       'https://depot.galaxyproject.org/singularity/cnvpytor:1.2.1--pyhdfd78af_0':
-       'quay.io/biocontainers/cnvpytor:1.0--py39h6a678da_2' }"
+       'quay.io/biocontainers/cnvpytor:1.2.1--pyhdfd78af_0' }"

    input:
    tuple val(meta), path(pytor)
+   val bin_sizes

    output:
    tuple val(meta), path("${pytor.baseName}.pytor"), emit: pytor

@@ -18,15 +19,25 @@ process CNVPYTOR_PARTITION {
    task.ext.when == null || task.ext.when

    script:
-   def args = task.ext.args ?: '1000'
+   def bins = bin_sizes ?: '1000'
    """
    cnvpytor \\
        -root $pytor \\
-       -partition $args
+       -partition $bins

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
-       cnvpytor: \$(echo \$(cnvpytor --version 2>&1) | sed 's/^.*pyCNVnator //; s/Using.*\$//' ))
+       cnvpytor: \$(echo \$(cnvpytor --version 2>&1) | sed 's/CNVpytor //' ))
    END_VERSIONS
    """

+   stub:
+   """
+   touch ${pytor.baseName}.pytor
+
+   cat <<-END_VERSIONS > versions.yml
+   "${task.process}":
+       cnvpytor: \$(echo \$(cnvpytor --version 2>&1) | sed 's/CNVpytor //' ))
+   END_VERSIONS
+   """
}

@@ -22,6 +22,9 @@ input:
      type: file
      description: pytor file containing read depth data
      pattern: "*.{pytor}"
+  - bin_sizes:
+      type: string
+      description: list of binsizes separated by space e.g. "1000 10000" and "1000"

output:
  - meta:

@@ -40,3 +43,4 @@ output:

authors:
  - "@sima-r"
+  - "@ramprasadn"
modules/cnvpytor/view/main.nf (new file, 60 lines)
@@ -0,0 +1,60 @@
process CNVPYTOR_VIEW {
    tag "$meta.id"
    label 'process_medium'

    conda (params.enable_conda ? "bioconda::cnvpytor=1.2.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://depot.galaxyproject.org/singularity/cnvpytor:1.2.1--pyhdfd78af_0':
        'quay.io/biocontainers/cnvpytor:1.2.1--pyhdfd78af_0' }"

    input:
    tuple val(meta), path(pytor_files)
    val bin_sizes
    val output_format

    output:
    tuple val(meta), path("*.vcf"), emit: vcf , optional: true
    tuple val(meta), path("*.tsv"), emit: tsv , optional: true
    tuple val(meta), path("*.xls"), emit: xls , optional: true
    path "versions.yml"           , emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    def output_suffix = output_format ?: 'vcf'
    def bins = bin_sizes ?: '1000'
    def input = pytor_files.join(" ")
    def prefix = task.ext.prefix ?: "${meta.id}"
    """
    python3 <<CODE
    import cnvpytor,os
    binsizes = "${bins}".split(" ")
    for binsize in binsizes:
        file_list = "${input}".split(" ")
        app = cnvpytor.Viewer(file_list, params={} )
        outputfile = "{}_{}.{}".format("${prefix}",binsize.strip(),"${output_suffix}")
        app.print_filename = outputfile
        app.bin_size = int(binsize)
        app.print_calls_file()
    CODE

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        cnvpytor: \$(echo \$(cnvpytor --version 2>&1) | sed 's/CNVpytor //' ))
    END_VERSIONS
    """

    stub:
    def output_suffix = output_format ?: 'vcf'
    def prefix = task.ext.prefix ?: "${meta.id}"
    """
    touch ${prefix}.${output_suffix}

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        cnvpytor: \$(echo \$(cnvpytor --version 2>&1) | sed 's/CNVpytor //' ))
    END_VERSIONS
    """
}
modules/cnvpytor/view/meta.yml (new file, 56 lines)
@@ -0,0 +1,56 @@
name: cnvpytor_view
description: view function to generate vcfs
keywords:
  - cnv calling
tools:
  - cnvpytor:
      description: calling CNVs using read depth
      homepage: https://github.com/abyzovlab/CNVpytor
      documentation: https://github.com/abyzovlab/CNVpytor
      tool_dev_url: https://github.com/abyzovlab/CNVpytor
      doi: "10.1101/2021.01.27.428472v1"
      licence: ["MIT"]

input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test' ]
  - pytor_files:
      type: file
      description: pytor file containing cnv calls. To merge calls from multiple samples use a list of files.
      pattern: "*.{pytor}"
  - bin_sizes:
      type: string
      description: list of binsizes separated by space e.g. "1000 10000" and "1000"
  - output_format:
      type: string
      description: output format of the cnv calls. Valid entries are "tsv", "vcf", and "xls"

output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test' ]
  - tsv:
      type: file
      description: tsv file containing cnv calls
      pattern: "*.{tsv}"
  - vcf:
      type: file
      description: vcf file containing cnv calls
      pattern: "*.{vcf}"
  - xls:
      type: file
      description: xls file containing cnv calls
      pattern: "*.{xls}"
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"

authors:
  - "@sima-r"
  - "@ramprasadn"
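As the meta.yml notes, `pytor_files` may be a list so that calls from several samples are merged; a hedged usage sketch with invented names:

    // Hypothetical export of CNV calls at two bin sizes, in VCF format.
    include { CNVPYTOR_VIEW } from '../modules/cnvpytor/view/main'

    workflow {
        pytor_files = Channel.of( [ [ id:'test' ], [ file('a.pytor'), file('b.pytor') ] ] )
        CNVPYTOR_VIEW ( pytor_files, '1000 10000', 'vcf' )
        CNVPYTOR_VIEW.out.vcf.view()
    }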
@@ -21,7 +21,7 @@ process CONTROLFREEC_ASSESSSIGNIFICANCE {
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
    """
-   cat /usr/local/bin/assess_significance.R | R --slave --args ${cnvs} ${ratio}
+   cat \$(which assess_significance.R) | R --slave --args ${cnvs} ${ratio}

    mv *.p.value.txt ${prefix}.p.value.txt

@@ -30,4 +30,15 @@ process CONTROLFREEC_ASSESSSIGNIFICANCE {
        controlfreec: \$(echo \$(freec -version 2>&1) | sed 's/^.*Control-FREEC //; s/:.*\$//' | sed -e "s/Control-FREEC v//g" )
    END_VERSIONS
    """

+   stub:
+   def prefix = task.ext.prefix ?: "${meta.id}"
+   """
+   touch ${prefix}.p.value.txt
+
+   cat <<-END_VERSIONS > versions.yml
+   "${task.process}":
+       controlfreec: \$(echo \$(freec -version 2>&1) | sed 's/^.*Control-FREEC //; s/:.*\$//' | sed -e "s/Control-FREEC v//g" )
+   END_VERSIONS
+   """
}
@@ -21,7 +21,7 @@ process CONTROLFREEC_FREEC {

    output:
    tuple val(meta), path("*_ratio.BedGraph") , emit: bedgraph, optional: true
-   tuple val(meta), path("*_control.cpn")    , emit: control_cpn
+   tuple val(meta), path("*_control.cpn")    , emit: control_cpn, optional: true
    tuple val(meta), path("*_sample.cpn")     , emit: sample_cpn
    tuple val(meta), path("GC_profile.*.cpn") , emit: gcprofile_cpn, optional:true
    tuple val(meta), path("*_BAF.txt")        , emit: BAF

@@ -155,4 +155,22 @@ process CONTROLFREEC_FREEC {
        controlfreec: \$(echo \$(freec -version 2>&1) | sed 's/^.*Control-FREEC //; s/:.*\$//' | sed -e "s/Control-FREEC v//g" )
    END_VERSIONS
    """

+   stub:
+   def prefix = task.ext.prefix ?: "${meta.id}"
+   """
+   touch ${prefix}_ratio.BedGraph
+   touch ${prefix}_sample.cpn
+   touch GC_profile.${prefix}.cpn
+   touch ${prefix}_BAF.txt
+   touch ${prefix}_CNVs
+   touch ${prefix}_info.txt
+   touch ${prefix}_ratio.txt
+   touch config.txt
+
+   cat <<-END_VERSIONS > versions.yml
+   "${task.process}":
+       controlfreec: \$(echo \$(freec -version 2>&1) | sed 's/^.*Control-FREEC //; s/:.*\$//' | sed -e "s/Control-FREEC v//g" )
+   END_VERSIONS
+   """
}
@@ -28,4 +28,15 @@ process CONTROLFREEC_FREEC2BED {
        controlfreec: \$(echo \$(freec -version 2>&1) | sed 's/^.*Control-FREEC //; s/:.*\$//' | sed -e "s/Control-FREEC v//g" )
    END_VERSIONS
    """

+   stub:
+   def prefix = task.ext.prefix ?: "${meta.id}"
+   """
+   touch ${prefix}.bed
+
+   cat <<-END_VERSIONS > versions.yml
+   "${task.process}":
+       controlfreec: \$(echo \$(freec -version 2>&1) | sed 's/^.*Control-FREEC //; s/:.*\$//' | sed -e "s/Control-FREEC v//g" )
+   END_VERSIONS
+   """
}

@@ -28,4 +28,15 @@ process CONTROLFREEC_FREEC2CIRCOS {
        controlfreec: \$(echo \$(freec -version 2>&1) | sed 's/^.*Control-FREEC //; s/:.*\$//' | sed -e "s/Control-FREEC v//g" )
    END_VERSIONS
    """

+   stub:
+   def prefix = task.ext.prefix ?: "${meta.id}"
+   """
+   touch ${prefix}.circos.txt
+
+   cat <<-END_VERSIONS > versions.yml
+   "${task.process}":
+       controlfreec: \$(echo \$(freec -version 2>&1) | sed 's/^.*Control-FREEC //; s/:.*\$//' | sed -e "s/Control-FREEC v//g" )
+   END_VERSIONS
+   """
}

@@ -25,12 +25,24 @@ process CONTROLFREEC_MAKEGRAPH {
    def prefix = task.ext.prefix ?: "${meta.id}"
    def baf = baf ?: ""
    """
-   cat /usr/local/bin/makeGraph.R | R --slave --args ${args} ${ratio} ${baf}
+   cat \$(which makeGraph.R) | R --slave --args ${args} ${ratio} ${baf}

    mv *_BAF.txt.png ${prefix}_BAF.png
    mv *_ratio.txt.log2.png ${prefix}_ratio.log2.png
    mv *_ratio.txt.png ${prefix}_ratio.png

+   cat <<-END_VERSIONS > versions.yml
+   "${task.process}":
+       controlfreec: \$(echo \$(freec -version 2>&1) | sed 's/^.*Control-FREEC //; s/:.*\$//' | sed -e "s/Control-FREEC v//g" )
+   END_VERSIONS
+   """
+
+   stub:
+   def prefix = task.ext.prefix ?: "${meta.id}"
+   """
+   touch ${prefix}_BAF.png
+   touch ${prefix}_ratio.log2.png
+   touch ${prefix}_ratio.png
+
    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
@@ -2,10 +2,10 @@ process CUSTOM_GETCHROMSIZES {
    tag "$fasta"
    label 'process_low'

-   conda (params.enable_conda ? "bioconda::samtools=1.15" : null)
+   conda (params.enable_conda ? "bioconda::samtools=1.15.1" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/samtools:1.15--h1170115_1' :
+       'https://depot.galaxyproject.org/singularity/samtools:1.15.1--h1170115_0' :
-       'quay.io/biocontainers/samtools:1.15--h1170115_1' }"
+       'quay.io/biocontainers/samtools:1.15.1--h1170115_0' }"

    input:
    path fasta
modules/custom/sratoolsncbisettings/main.nf (new file, 20 lines)
@@ -0,0 +1,20 @@
process CUSTOM_SRATOOLSNCBISETTINGS {
    tag 'ncbi-settings'
    label 'process_low'

    conda (params.enable_conda ? 'bioconda::sra-tools=2.11.0' : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://depot.galaxyproject.org/singularity/sra-tools:2.11.0--pl5321ha49a11a_3' :
        'quay.io/biocontainers/sra-tools:2.11.0--pl5321ha49a11a_3' }"

    output:
    path('*.mkfg')     , emit: ncbi_settings
    path 'versions.yml', emit: versions

    when:
    task.ext.when == null || task.ext.when

    shell:
    config = "/LIBS/GUID = \"${UUID.randomUUID().toString()}\"\\n/libs/cloud/report_instance_identity = \"true\"\\n"
    template 'detect_ncbi_settings.sh'
}
modules/custom/sratoolsncbisettings/meta.yml (new file, 28 lines)
@@ -0,0 +1,28 @@
name: "sratoolsncbisettings"
description: Test for the presence of suitable NCBI settings or create them on the fly.
keywords:
  - NCBI
  - settings
  - sra-tools
  - prefetch
  - fasterq-dump
tools:
  - "sratools":
      description: "SRA Toolkit and SDK from NCBI"
      homepage: https://github.com/ncbi/sra-tools
      documentation: https://github.com/ncbi/sra-tools/wiki
      tool_dev_url: https://github.com/ncbi/sra-tools
      licence: "['Public Domain']"

output:
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"
  - ncbi_settings:
      type: file
      description: An NCBI user settings file.
      pattern: "*.mkfg"

authors:
  - "@Midnighter"
@@ -0,0 +1,45 @@
#!/usr/bin/env bash

set -u

# Get the expected NCBI settings path and define the environment variable
# `NCBI_SETTINGS`.
eval "$(vdb-config -o n NCBI_SETTINGS | sed 's/[" ]//g')"

# If the user settings do not exist yet, create a file suitable for `prefetch`
# and `fasterq-dump`. If an existing settings file does not contain the required
# values, error out with a helpful message.
if [[ ! -f "${NCBI_SETTINGS}" ]]; then
    printf '!{config}' > 'user-settings.mkfg'
else
    prefetch --help &> /dev/null
    if [[ $? = 78 ]]; then
        echo "You have an existing vdb-config at '${NCBI_SETTINGS}' but it is"\
            "missing the required entries for /LIBS/GUID and"\
            "/libs/cloud/report_instance_identity."\
            "Feel free to add the following to your settings file:" >&2
        echo "$(printf '!{config}')" >&2
        exit 1
    fi
    fasterq-dump --help &> /dev/null
    if [[ $? = 78 ]]; then
        echo "You have an existing vdb-config at '${NCBI_SETTINGS}' but it is"\
            "missing the required entries for /LIBS/GUID and"\
            "/libs/cloud/report_instance_identity."\
            "Feel free to add the following to your settings file:" >&2
        echo "$(printf '!{config}')" >&2
        exit 1
    fi
    if [[ "${NCBI_SETTINGS}" != *.mkfg ]]; then
        echo "The detected settings '${NCBI_SETTINGS}' do not have the required"\
            "file extension '.mkfg'." >&2
        exit 1
    fi
    cp "${NCBI_SETTINGS}" ./
fi

cat <<-END_VERSIONS > versions.yml
"!{task.process}":
    sratools: $(vdb-config --version 2>&1 | grep -Eo '[0-9.]+')
END_VERSIONS
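In a download subworkflow the settings file would typically be generated once and handed to every sra-tools task; a sketch with the downstream process name assumed, not taken from this commit:

    // Hypothetical wiring: create/validate NCBI settings once, reuse everywhere.
    include { CUSTOM_SRATOOLSNCBISETTINGS } from '../modules/custom/sratoolsncbisettings/main'

    workflow {
        CUSTOM_SRATOOLSNCBISETTINGS ()
        ncbi_settings = CUSTOM_SRATOOLSNCBISETTINGS.out.ncbi_settings.first()
        // SRATOOLS_PREFETCH ( ids, ncbi_settings )   // downstream module assumed
    }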
@@ -2,27 +2,28 @@ process DASTOOL_DASTOOL {
    tag "$meta.id"
    label 'process_medium'

-   conda (params.enable_conda ? "bioconda::das_tool=1.1.3" : null)
+   conda (params.enable_conda ? "bioconda::das_tool=1.1.4" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/das_tool:1.1.3--r41hdfd78af_0' :
+       'https://depot.galaxyproject.org/singularity/das_tool:1.1.4--r41hdfd78af_1' :
-       'quay.io/biocontainers/das_tool:1.1.3--r41hdfd78af_0' }"
+       'quay.io/biocontainers/das_tool:1.1.4--r41hdfd78af_1' }"

    input:
    tuple val(meta), path(contigs), path(bins)
    path(proteins)
    path(db_directory)
-   val(search_engine)

    output:
    tuple val(meta), path("*.log")                       , emit: log
-   tuple val(meta), path("*_summary.txt")               , emit: summary
+   tuple val(meta), path("*_summary.tsv")               , optional: true, emit: summary
-   tuple val(meta), path("*_DASTool_scaffolds2bin.txt") , emit: scaffolds2bin
+   tuple val(meta), path("*_DASTool_contig2bin.tsv")    , optional: true, emit: contig2bin
    tuple val(meta), path("*.eval")                      , optional: true, emit: eval
    tuple val(meta), path("*_DASTool_bins/*.fa")         , optional: true, emit: bins
    tuple val(meta), path("*.pdf")                       , optional: true, emit: pdfs
-   tuple val(meta), path("*.proteins.faa")              , optional: true, emit: fasta_proteins
+   tuple val(meta), path("*.candidates.faa")            , optional: true, emit: fasta_proteins
+   tuple val(meta), path("*.faa")                       , optional: true, emit: candidates_faa
    tuple val(meta), path("*.archaea.scg")               , optional: true, emit: fasta_archaea_scg
    tuple val(meta), path("*.bacteria.scg")              , optional: true, emit: fasta_bacteria_scg
+   tuple val(meta), path("*.b6")                        , optional: true, emit: b6
    tuple val(meta), path("*.seqlength")                 , optional: true, emit: seqlength
    path "versions.yml"                                  , emit: versions

@@ -33,17 +34,12 @@ process DASTOOL_DASTOOL {
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
    def bin_list = bins instanceof List ? bins.join(",") : "$bins"
-   def engine = search_engine ? "--search_engine $search_engine" : "--search_engine diamond"
    def db_dir = db_directory ? "--db_directory $db_directory" : ""
    def clean_contigs = contigs.toString() - ".gz"
    def decompress_contigs = contigs.toString() == clean_contigs ? "" : "gunzip -q -f $contigs"
-   def decompress_proteins = proteins ? "gunzip -f $proteins" : ""
    def clean_proteins = proteins ? proteins.toString() - ".gz" : ""
-   def proteins_pred = proteins ? "--proteins $clean_proteins" : ""
+   def decompress_proteins = proteins ? "gunzip -f $proteins" : ""
+   def proteins_pred = proteins ? "-p $clean_proteins" : ""
-   if (! search_engine) {
-       log.info('[DAS_Tool] Default search engine (USEARCH) is proprietary software and not available in bioconda. Using DIAMOND as alternative.')
-   }

    """
    $decompress_proteins

@@ -53,15 +49,14 @@ process DASTOOL_DASTOOL {
        $args \\
        $proteins_pred \\
        $db_dir \\
-       $engine \\
        -t $task.cpus \\
-       --bins $bin_list \\
+       -i $bin_list \\
        -c $clean_contigs \\
        -o $prefix

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
-       dastool: \$( DAS_Tool --version 2>&1 | grep "DAS Tool" | sed 's/DAS Tool version //' )
+       dastool: \$( DAS_Tool --version 2>&1 | grep "DAS Tool" | sed 's/DAS Tool //' )
    END_VERSIONS
    """
}
@@ -34,8 +34,8 @@ input:
      pattern: "*.{fa.gz,fas.gz,fasta.gz}"
  - bins:
      type: file
-     description: "Scaffolds2bin tabular file generated with dastool/scaffolds2bin"
+     description: "FastaToContig2Bin tabular file generated with dastool/fastatocontig2bin"
-     pattern: "*.scaffolds2bin.tsv"
+     pattern: "*.tsv"
  - proteins:
      type: file
      description: Predicted proteins in prodigal fasta format (>scaffoldID_geneNo)

@@ -43,9 +43,6 @@ input:
  - db_directory:
      type: file
      description: (optional) Directory of single copy gene database.
-  - search_engine:
-      type: val
-      description: Engine used for single copy gene identification. USEARCH is not supported due to it being proprietary [blast/diamond]

output:
  - meta:

@@ -65,14 +62,17 @@ output:
      type: file
      description: Summary of output bins including quality and completeness estimates
      pattern: "*summary.txt"
-  - scaffolds2bin:
+  - contig2bin:
      type: file
      description: Scaffolds to bin file of output bins
-     pattern: "*.scaffolds2bin.txt"
+     pattern: "*.contig2bin.txt"
  - eval:
      type: file
      description: Quality and completeness estimates of input bin sets
      pattern: "*.eval"
+  - bins:
+      description: Final refined bins in fasta format
+      pattern: "*.fa"
  - pdfs:
      type: file
      description: Plots showing the amount of high quality bins and score distribution of bins per method

@@ -89,6 +89,10 @@ output:
      type: file
      description: Results of bacterial single-copy-gene prediction
      pattern: "*.bacteria.scg"
+  - b6:
+      type: file
+      description: Results in b6 format
+      pattern: "*.b6"
  - seqlength:
      type: file
      description: Summary of contig lengths
modules/dastool/fastatocontig2bin/main.nf (new file, 41 lines)
@@ -0,0 +1,41 @@
process DASTOOL_FASTATOCONTIG2BIN {
    tag "$meta.id"
    label 'process_low'

    conda (params.enable_conda ? "bioconda::das_tool=1.1.4" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
        'https://depot.galaxyproject.org/singularity/das_tool:1.1.4--r41hdfd78af_1' :
        'quay.io/biocontainers/das_tool:1.1.4--r41hdfd78af_1' }"

    input:
    tuple val(meta), path(fasta)
    val(extension)

    output:
    tuple val(meta), path("*.tsv"), emit: fastatocontig2bin
    path "versions.yml"           , emit: versions

    when:
    task.ext.when == null || task.ext.when

    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
    def file_extension = extension ? extension : "fasta"
    def clean_fasta = fasta.toString() - ".gz"
    def decompress_fasta = fasta.toString() == clean_fasta ? "" : "gunzip -q -f $fasta"
    """
    $decompress_fasta

    Fasta_to_Contig2Bin.sh \\
        $args \\
        -i . \\
        -e $file_extension \\
        > ${prefix}.tsv

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        dastool: \$( DAS_Tool --version 2>&1 | grep "DAS Tool" | sed 's/DAS Tool //' )
    END_VERSIONS
    """
}
|
56
modules/dastool/fastatocontig2bin/meta.yml
Normal file
56
modules/dastool/fastatocontig2bin/meta.yml
Normal file
|
@ -0,0 +1,56 @@
|
||||||
|
name: dastool_fastatocontig2bin
|
||||||
|
description: Helper script to convert a set of bins in fasta format to tabular scaffolds2bin format
|
||||||
|
keywords:
|
||||||
|
- binning
|
||||||
|
- das tool
|
||||||
|
- table
|
||||||
|
- de novo
|
||||||
|
- bins
|
||||||
|
- contigs
|
||||||
|
- assembly
|
||||||
|
- das_tool
|
||||||
|
tools:
|
||||||
|
- dastool:
|
||||||
|
description: |
|
||||||
|
DAS Tool is an automated method that integrates the results
|
||||||
|
of a flexible number of binning algorithms to calculate an optimized, non-redundant
|
||||||
|
set of bins from a single assembly.
|
||||||
|
|
||||||
|
homepage: https://github.com/cmks/DAS_Tool
|
||||||
|
documentation: https://github.com/cmks/DAS_Tool
|
||||||
|
tool_dev_url: https://github.com/cmks/DAS_Tool
|
||||||
|
doi: "10.1038/s41564-018-0171-1"
|
||||||
|
licence: ["BSD"]
|
||||||
|
|
||||||
|
input:
|
||||||
|
- meta:
|
||||||
|
type: map
|
||||||
|
description: |
|
||||||
|
Groovy Map containing sample information
|
||||||
|
e.g. [ id:'test', single_end:false ]
|
||||||
|
- fasta:
|
||||||
|
type: file
|
||||||
|
description: Fasta of list of fasta files recommended to be gathered via with .collect() of bins
|
||||||
|
pattern: "*.{fa,fa.gz,fas,fas.gz,fna,fna.gz,fasta,fasta.gz}"
|
||||||
|
- extension:
|
||||||
|
type: val
|
||||||
|
description: Fasta file extension (fa | fas | fasta | ...), without .gz suffix, if gzipped input.
|
||||||
|
|
||||||
|
output:
|
||||||
|
- meta:
|
||||||
|
type: map
|
||||||
|
description: |
|
||||||
|
Groovy Map containing sample information
|
||||||
|
e.g. [ id:'test', single_end:false ]
|
||||||
|
- versions:
|
||||||
|
type: file
|
||||||
|
description: File containing software versions
|
||||||
|
pattern: "versions.yml"
|
||||||
|
- fastatocontig2bin:
|
||||||
|
type: file
|
||||||
|
description: tabular contig2bin file for DAS tool input
|
||||||
|
pattern: "*.tsv"
|
||||||
|
|
||||||
|
authors:
|
||||||
|
- "@maxibor"
|
||||||
|
- "@jfy133"
|
|
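The helper's TSV output is what the reworked DASTOOL_DASTOOL above now expects in its bins slot (`-i`). A sketch under assumed channel contents:

    // Hypothetical chain: binner fastas -> contig2bin TSV -> DAS Tool.
    include { DASTOOL_FASTATOCONTIG2BIN } from '../modules/dastool/fastatocontig2bin/main'
    include { DASTOOL_DASTOOL           } from '../modules/dastool/dastool/main'

    workflow {
        bins = Channel.of( [ [ id:'test' ], file('bins/*.fa') ] )
        DASTOOL_FASTATOCONTIG2BIN ( bins, 'fa' )
        contigs_and_bins = Channel.of( [ [ id:'test' ], file('contigs.fa.gz') ] )
            .join( DASTOOL_FASTATOCONTIG2BIN.out.fastatocontig2bin )
        DASTOOL_DASTOOL ( contigs_and_bins, [], [] )   // no proteins, no custom db
    }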
@@ -2,19 +2,25 @@ process DIAMOND_BLASTP {
    tag "$meta.id"
    label 'process_medium'

-   // Dimaond is limited to v2.0.9 because there is not a
-   // singularity version higher than this at the current time.
-   conda (params.enable_conda ? "bioconda::diamond=2.0.9" : null)
+   conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-       'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
+       'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
-       'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
+       'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"

    input:
    tuple val(meta), path(fasta)
    path db
+   val out_ext
+   val blast_columns

    output:
-   tuple val(meta), path('*.txt'), emit: txt
+   tuple val(meta), path('*.blast'), optional: true, emit: blast
+   tuple val(meta), path('*.xml') , optional: true, emit: xml
+   tuple val(meta), path('*.txt') , optional: true, emit: txt
+   tuple val(meta), path('*.daa') , optional: true, emit: daa
+   tuple val(meta), path('*.sam') , optional: true, emit: sam
+   tuple val(meta), path('*.tsv') , optional: true, emit: tsv
+   tuple val(meta), path('*.paf') , optional: true, emit: paf
    path "versions.yml"           , emit: versions

    when:

@@ -23,6 +29,21 @@ process DIAMOND_BLASTP {
    script:
    def args = task.ext.args ?: ''
    def prefix = task.ext.prefix ?: "${meta.id}"
+   def columns = blast_columns ? "${blast_columns}" : ''
+   switch ( out_ext ) {
+       case "blast": outfmt = 0; break
+       case "xml": outfmt = 5; break
+       case "txt": outfmt = 6; break
+       case "daa": outfmt = 100; break
+       case "sam": outfmt = 101; break
+       case "tsv": outfmt = 102; break
+       case "paf": outfmt = 103; break
+       default:
+           outfmt = '6';
+           out_ext = 'txt';
+           log.warn("Unknown output file format provided (${out_ext}): selecting DIAMOND default of tabular BLAST output (txt)");
+           break
+   }
    """
    DB=`find -L ./ -name "*.dmnd" | sed 's/.dmnd//'`

@@ -31,8 +52,9 @@ process DIAMOND_BLASTP {
        --threads $task.cpus \\
        --db \$DB \\
        --query $fasta \\
+       --outfmt ${outfmt} ${columns} \\
        $args \\
-       --out ${prefix}.txt
+       --out ${prefix}.${out_ext}

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
@@ -28,12 +28,50 @@ input:
      type: directory
      description: Directory containing the protein blast database
      pattern: "*"
+  - out_ext:
+      type: string
+      description: |
+        Specify the type of output file to be generated. `blast` corresponds to
+        BLAST pairwise format. `xml` corresponds to BLAST xml format.
+        `txt` corresponds to BLAST tabular format. `tsv` corresponds to
+        taxonomic classification format.
+      pattern: "blast|xml|txt|daa|sam|tsv|paf"
+  - blast_columns:
+      type: string
+      description: |
+        Optional space separated list of DIAMOND tabular BLAST output keywords
+        used in conjunction with the 'txt' out_ext option (--outfmt 6). See
+        DIAMOND documentation for more information.

output:
-  - txt:
+  - blast:
      type: file
      description: File containing blastp hits
-     pattern: "*.{blastp.txt}"
+     pattern: "*.{blast}"
+  - xml:
+      type: file
+      description: File containing blastp hits
+      pattern: "*.{xml}"
+  - txt:
+      type: file
+      description: File containing hits in tabular BLAST format.
+      pattern: "*.{txt}"
+  - daa:
+      type: file
+      description: File containing hits in DAA format
+      pattern: "*.{daa}"
+  - sam:
+      type: file
+      description: File containing aligned reads in SAM format
+      pattern: "*.{sam}"
+  - tsv:
+      type: file
+      description: Tab separated file containing taxonomic classification of hits
+      pattern: "*.{tsv}"
+  - paf:
+      type: file
+      description: File containing aligned reads in pairwise mapping format (PAF)
+      pattern: "*.{paf}"
  - versions:
      type: file
      description: File containing software versions

@@ -41,3 +79,4 @@ output:

authors:
  - "@spficklin"
+  - "@jfy133"
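A quick sketch of the widened interface: the caller now chooses the output format and, for tabular output, the column set. Only the module name comes from the diff; the rest is illustrative:

    // Hypothetical call: tabular BLAST output with a custom column selection.
    include { DIAMOND_BLASTP } from '../modules/diamond/blastp/main'

    workflow {
        proteins = Channel.of( [ [ id:'test' ], file('proteins.fasta') ] )
        DIAMOND_BLASTP ( proteins, file('diamond_db'), 'txt', 'qseqid sseqid pident evalue' )
        DIAMOND_BLASTP.out.txt.view()
    }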
@ -2,19 +2,25 @@ process DIAMOND_BLASTX {
|
||||||
tag "$meta.id"
|
tag "$meta.id"
|
||||||
label 'process_medium'
|
label 'process_medium'
|
||||||
|
|
||||||
// Dimaond is limited to v2.0.9 because there is not a
|
conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
|
||||||
// singularity version higher than this at the current time.
|
|
||||||
conda (params.enable_conda ? "bioconda::diamond=2.0.9" : null)
|
|
||||||
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
|
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
|
||||||
'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
|
'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
|
||||||
'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
|
'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"
|
||||||
|
|
||||||
input:
|
input:
|
||||||
tuple val(meta), path(fasta)
|
tuple val(meta), path(fasta)
|
||||||
path db
|
path db
|
||||||
|
val out_ext
|
||||||
|
val blast_columns
|
||||||
|
|
||||||
output:
|
output:
|
||||||
tuple val(meta), path('*.txt'), emit: txt
|
tuple val(meta), path('*.blast'), optional: true, emit: blast
|
||||||
|
tuple val(meta), path('*.xml') , optional: true, emit: xml
|
||||||
|
tuple val(meta), path('*.txt') , optional: true, emit: txt
|
||||||
|
tuple val(meta), path('*.daa') , optional: true, emit: daa
|
||||||
|
tuple val(meta), path('*.sam') , optional: true, emit: sam
|
||||||
|
tuple val(meta), path('*.tsv') , optional: true, emit: tsv
|
||||||
|
tuple val(meta), path('*.paf') , optional: true, emit: paf
|
||||||
path "versions.yml" , emit: versions
|
path "versions.yml" , emit: versions
|
||||||
|
|
||||||
when:
|
when:
|
||||||
|
@ -23,6 +29,21 @@ process DIAMOND_BLASTX {
|
||||||
script:
|
script:
|
||||||
def args = task.ext.args ?: ''
|
def args = task.ext.args ?: ''
|
||||||
def prefix = task.ext.prefix ?: "${meta.id}"
|
def prefix = task.ext.prefix ?: "${meta.id}"
|
||||||
|
def columns = blast_columns ? "${blast_columns}" : ''
|
||||||
|
switch ( out_ext ) {
|
||||||
|
case "blast": outfmt = 0; break
|
||||||
|
case "xml": outfmt = 5; break
|
||||||
|
case "txt": outfmt = 6; break
|
||||||
|
case "daa": outfmt = 100; break
|
||||||
|
case "sam": outfmt = 101; break
|
||||||
|
case "tsv": outfmt = 102; break
|
||||||
|
case "paf": outfmt = 103; break
|
||||||
|
default:
|
||||||
|
outfmt = '6';
|
||||||
|
out_ext = 'txt';
|
||||||
|
log.warn("Unknown output file format provided (${out_ext}): selecting DIAMOND default of tabular BLAST output (txt)");
|
||||||
|
break
|
||||||
|
}
|
||||||
"""
|
"""
|
||||||
DB=`find -L ./ -name "*.dmnd" | sed 's/.dmnd//'`
|
DB=`find -L ./ -name "*.dmnd" | sed 's/.dmnd//'`
|
||||||
|
|
||||||
|
@ -31,8 +52,9 @@ process DIAMOND_BLASTX {
|
||||||
--threads $task.cpus \\
|
--threads $task.cpus \\
|
||||||
--db \$DB \\
|
--db \$DB \\
|
||||||
--query $fasta \\
|
--query $fasta \\
|
||||||
|
--outfmt ${outfmt} ${columns} \\
|
||||||
$args \\
|
$args \\
|
||||||
--out ${prefix}.txt
|
--out ${prefix}.${out_ext}
|
||||||
|
|
||||||
cat <<-END_VERSIONS > versions.yml
|
cat <<-END_VERSIONS > versions.yml
|
||||||
"${task.process}":
|
"${task.process}":
|
||||||
|
|
|
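Because extra flags flow in through task.ext.args rather than module parameters, pipelines tune DIAMOND from their own configuration. A hedged sketch of such a pipeline-side config (the selector and option values are illustrative assumptions):

    // conf/modules.config (hypothetical pipeline file)
    process {
        withName: 'DIAMOND_BLASTX' {
            // forwarded verbatim into the script block as $args
            ext.args = '--evalue 1e-5 --max-target-seqs 5'
        }
    }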
@@ -28,12 +28,44 @@ input:
       type: directory
       description: Directory containing the nucleotide blast database
       pattern: "*"
+  - out_ext:
+      type: string
+      description: |
+        Specify the type of output file to be generated. `blast` corresponds to
+        BLAST pairwise format. `xml` corresponds to BLAST XML format.
+        `txt` corresponds to BLAST tabular format. `tsv` corresponds to
+        taxonomic classification format.
+      pattern: "blast|xml|txt|daa|sam|tsv|paf"

 output:
+  - blast:
+      type: file
+      description: File containing blastx hits
+      pattern: "*.{blast}"
+  - xml:
+      type: file
+      description: File containing blastx hits
+      pattern: "*.{xml}"
   - txt:
       type: file
-      description: File containing blastx hits
-      pattern: "*.{blastx.txt}"
+      description: File containing hits in tabular BLAST format.
+      pattern: "*.{txt}"
+  - daa:
+      type: file
+      description: File containing hits in DAA format
+      pattern: "*.{daa}"
+  - sam:
+      type: file
+      description: File containing aligned reads in SAM format
+      pattern: "*.{sam}"
+  - tsv:
+      type: file
+      description: Tab-separated file containing taxonomic classification of hits
+      pattern: "*.{tsv}"
+  - paf:
+      type: file
+      description: File containing aligned reads in pairwise mapping format (PAF)
+      pattern: "*.{paf}"
   - versions:
       type: file
       description: File containing software versions
@@ -41,3 +73,4 @@ output:
 authors:
   - "@spficklin"
+  - "@jfy133"
@@ -2,12 +2,10 @@ process DIAMOND_MAKEDB {
     tag "$fasta"
     label 'process_medium'

-    // Dimaond is limited to v2.0.9 because there is not a
-    // singularity version higher than this at the current time.
-    conda (params.enable_conda ? 'bioconda::diamond=2.0.9' : null)
+    conda (params.enable_conda ? "bioconda::diamond=2.0.15" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/diamond:2.0.9--hdcc8f71_0' :
-        'quay.io/biocontainers/diamond:2.0.9--hdcc8f71_0' }"
+        'https://depot.galaxyproject.org/singularity/diamond:2.0.15--hb97b32f_0' :
+        'quay.io/biocontainers/diamond:2.0.15--hb97b32f_0' }"

     input:
     path fasta
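With all three modules pinned to the same diamond 2.0.15 build, a database created by DIAMOND_MAKEDB can feed the search modules directly. A sketch, assuming the makedb module emits the built database on an output channel named db (that emit name, the include paths and the file names are assumptions):

    include { DIAMOND_MAKEDB } from './modules/diamond/makedb/main'
    include { DIAMOND_BLASTX } from './modules/diamond/blastx/main'

    workflow {
        DIAMOND_MAKEDB(Channel.fromPath('reference.fasta'))      // hypothetical reference
        ch_contigs = Channel.of([ [ id:'sample1' ], file('contigs.fasta') ])
        // 'txt' selects tabular output; '' keeps DIAMOND's default columns
        DIAMOND_BLASTX(ch_contigs, DIAMOND_MAKEDB.out.db, 'txt', '')
    }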
@@ -2,10 +2,10 @@ process DRAGMAP_ALIGN {
     tag "$meta.id"
     label 'process_high'

-    conda (params.enable_conda ? "bioconda::dragmap=1.2.1 bioconda::samtools=1.14 conda-forge::pigz=2.3.4" : null)
+    conda (params.enable_conda ? "bioconda::dragmap=1.2.1 bioconda::samtools=1.15.1 conda-forge::pigz=2.3.4" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/mulled-v2-580d344d9d4a496cd403932da8765f9e0187774d:f7aad9060cde739c95685fc5ff6d6f7e3ec629c8-0':
-        'quay.io/biocontainers/mulled-v2-580d344d9d4a496cd403932da8765f9e0187774d:f7aad9060cde739c95685fc5ff6d6f7e3ec629c8-0' }"
+        'https://depot.galaxyproject.org/singularity/mulled-v2-580d344d9d4a496cd403932da8765f9e0187774d:5ebebbc128cd624282eaa37d2c7fe01505a91a69-0':
+        'quay.io/biocontainers/mulled-v2-580d344d9d4a496cd403932da8765f9e0187774d:5ebebbc128cd624282eaa37d2c7fe01505a91a69-0' }"

     input:
     tuple val(meta), path(reads)
@@ -24,16 +24,15 @@ process DRAGMAP_ALIGN {
     def args = task.ext.args ?: ''
     def args2 = task.ext.args2 ?: ''
     def prefix = task.ext.prefix ?: "${meta.id}"
-    def read_group = meta.read_group ? "--RGSM ${meta.read_group}" : ""
+    def reads_command = meta.single_end ? "-1 $reads" : "-1 ${reads[0]} -2 ${reads[1]}"
     def samtools_command = sort_bam ? 'sort' : 'view'
-    if (meta.single_end) {
     """
     dragen-os \\
         -r $hashmap \\
         $args \\
-        $read_group \\
         --num-threads $task.cpus \\
-        -1 $reads \\
+        $reads_command \\
         2> ${prefix}.dragmap.log \\
         | samtools $samtools_command $args2 --threads $task.cpus -o ${prefix}.bam -
@@ -44,24 +43,4 @@ process DRAGMAP_ALIGN {
         pigz: \$( pigz --version 2>&1 | sed 's/pigz //g' )
     END_VERSIONS
     """
-    } else {
-    """
-    dragen-os \\
-        -r $hashmap \\
-        $args \\
-        $read_group \\
-        --num-threads $task.cpus \\
-        -1 ${reads[0]} \\
-        -2 ${reads[1]} \\
-        2> ${prefix}.dragmap.log \\
-        | samtools $samtools_command $args2 --threads $task.cpus -o ${prefix}.bam -
-
-    cat <<-END_VERSIONS > versions.yml
-    "${task.process}":
-        dragmap: \$(echo \$(dragen-os --version 2>&1))
-        samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
-        pigz: \$( pigz --version 2>&1 | sed 's/pigz //g' )
-    END_VERSIONS
-    """
-    }
 }
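The refactor folds the duplicated single-end and paired-end script blocks into one interpolated reads_command. A standalone Groovy illustration of that ternary, runnable outside Nextflow (sample metadata and file names are hypothetical; in the module itself, $reads is a single staged file for single-end data):

    def meta  = [ id: 'sample1', single_end: false ]               // hypothetical sample map
    def reads = [ 'sample1_R1.fastq.gz', 'sample1_R2.fastq.gz' ]
    def reads_command = meta.single_end ? "-1 ${reads[0]}" : "-1 ${reads[0]} -2 ${reads[1]}"
    assert reads_command == '-1 sample1_R1.fastq.gz -2 sample1_R2.fastq.gz'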
@@ -2,10 +2,10 @@ process DSHBIO_EXPORTSEGMENTS {
     tag "${meta.id}"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::dsh-bio=2.0.7" : null)
+    conda (params.enable_conda ? "bioconda::dsh-bio=2.0.8" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/dsh-bio:2.0.7--hdfd78af_0' :
-        'quay.io/biocontainers/dsh-bio:2.0.7--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/dsh-bio:2.0.8--hdfd78af_0' :
+        'quay.io/biocontainers/dsh-bio:2.0.8--hdfd78af_0' }"

     input:
     tuple val(meta), path(gfa)
@@ -2,10 +2,10 @@ process DSHBIO_FILTERBED {
     tag "${meta.id}"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::dsh-bio=2.0.7" : null)
+    conda (params.enable_conda ? "bioconda::dsh-bio=2.0.8" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/dsh-bio:2.0.7--hdfd78af_0' :
-        'quay.io/biocontainers/dsh-bio:2.0.7--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/dsh-bio:2.0.8--hdfd78af_0' :
+        'quay.io/biocontainers/dsh-bio:2.0.8--hdfd78af_0' }"

     input:
     tuple val(meta), path(bed)
@@ -2,10 +2,10 @@ process DSHBIO_FILTERGFF3 {
     tag "${meta.id}"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::dsh-bio=2.0.7" : null)
+    conda (params.enable_conda ? "bioconda::dsh-bio=2.0.8" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/dsh-bio:2.0.7--hdfd78af_0' :
-        'quay.io/biocontainers/dsh-bio:2.0.7--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/dsh-bio:2.0.8--hdfd78af_0' :
+        'quay.io/biocontainers/dsh-bio:2.0.8--hdfd78af_0' }"

     input:
     tuple val(meta), path(gff3)
@@ -2,10 +2,10 @@ process DSHBIO_SPLITBED {
     tag "${meta.id}"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::dsh-bio=2.0.7" : null)
+    conda (params.enable_conda ? "bioconda::dsh-bio=2.0.8" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/dsh-bio:2.0.7--hdfd78af_0' :
-        'quay.io/biocontainers/dsh-bio:2.0.7--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/dsh-bio:2.0.8--hdfd78af_0' :
+        'quay.io/biocontainers/dsh-bio:2.0.8--hdfd78af_0' }"

     input:
     tuple val(meta), path(bed)
@@ -2,10 +2,10 @@ process DSHBIO_SPLITGFF3 {
     tag "${meta.id}"
     label 'process_medium'

-    conda (params.enable_conda ? "bioconda::dsh-bio=2.0.7" : null)
+    conda (params.enable_conda ? "bioconda::dsh-bio=2.0.8" : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/dsh-bio:2.0.7--hdfd78af_0' :
-        'quay.io/biocontainers/dsh-bio:2.0.7--hdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/dsh-bio:2.0.8--hdfd78af_0' :
+        'quay.io/biocontainers/dsh-bio:2.0.8--hdfd78af_0' }"

     input:
     tuple val(meta), path(gff3)
modules/elprep/filter/main.nf (new file, 89 lines)
@@ -0,0 +1,89 @@
+process ELPREP_FILTER {
+    tag "$meta.id"
+    label 'process_high'
+
+    conda (params.enable_conda ? "bioconda::elprep=5.1.2" : null)
+    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
+        'https://depot.galaxyproject.org/singularity/elprep:5.1.2--he881be0_0':
+        'quay.io/biocontainers/elprep:5.1.2--he881be0_0' }"
+
+    input:
+    tuple val(meta), path(bam)
+    val(run_haplotypecaller)
+    val(run_bqsr)
+    path(reference_sequences)
+    path(filter_regions_bed)
+    path(reference_elfasta)
+    path(known_sites_elsites)
+    path(target_regions_bed)
+    path(intermediate_bqsr_tables)
+    val(bqsr_tables_only)
+    val(get_activity_profile)
+    val(get_assembly_regions)
+
+    output:
+    tuple val(meta), path("output/**.{bam,sam}")    , emit: bam
+    tuple val(meta), path("*.metrics.txt")          , optional: true, emit: metrics
+    tuple val(meta), path("*.recall")               , optional: true, emit: recall
+    tuple val(meta), path("*.vcf.gz")               , optional: true, emit: gvcf
+    tuple val(meta), path("*.table")                , optional: true, emit: table
+    tuple val(meta), path("*.activity_profile.igv") , optional: true, emit: activity_profile
+    tuple val(meta), path("*.assembly_regions.igv") , optional: true, emit: assembly_regions
+    path "versions.yml"                             , emit: versions
+
+    when:
+    task.ext.when == null || task.ext.when
+
+    script:
+    def args = task.ext.args ?: ''
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    def suffix = args.contains("--output-type sam") ? "sam" : "bam"
+
+    // filter args
+    def reference_sequences_cmd = reference_sequences ? " --replace-reference-sequences ${reference_sequences}" : ""
+    def filter_regions_cmd = filter_regions_bed ? " --filter-non-overlapping-reads ${filter_regions_bed}" : ""
+
+    // markdup args
+    def markdup_cmd = args.contains("--mark-duplicates") ? " --mark-optical-duplicates ${prefix}.metrics.txt" : ""
+
+    // variant calling args
+    def haplotyper_cmd = run_haplotypecaller ? " --haplotypecaller ${prefix}.g.vcf.gz" : ""
+    def fasta_cmd = reference_elfasta ? " --reference ${reference_elfasta}" : ""
+    def known_sites_cmd = known_sites_elsites ? " --known-sites ${known_sites_elsites}" : ""
+    def target_regions_cmd = target_regions_bed ? " --target-regions ${target_regions_bed}" : ""
+
+    // bqsr args
+    def bqsr_cmd = run_bqsr ? " --bqsr ${prefix}.recall" : ""
+    def bqsr_tables_only_cmd = bqsr_tables_only ? " --bqsr-tables-only ${prefix}.table" : ""
+    def intermediate_bqsr_cmd = intermediate_bqsr_tables ? " --bqsr-apply ." : ""
+
+    // misc
+    def activity_profile_cmd = get_activity_profile ? " --activity-profile ${prefix}.activity_profile.igv" : ""
+    def assembly_regions_cmd = get_assembly_regions ? " --assembly-regions ${prefix}.assembly_regions.igv" : ""
+
+    """
+    elprep filter ${bam} output/${prefix}.${suffix} \\
+        ${reference_sequences_cmd} \\
+        ${filter_regions_cmd} \\
+        ${markdup_cmd} \\
+        ${haplotyper_cmd} \\
+        ${fasta_cmd} \\
+        ${known_sites_cmd} \\
+        ${target_regions_cmd} \\
+        ${bqsr_cmd} \\
+        ${bqsr_tables_only_cmd} \\
+        ${intermediate_bqsr_cmd} \\
+        ${activity_profile_cmd} \\
+        ${assembly_regions_cmd} \\
+        --nr-of-threads ${task.cpus} \\
+        $args
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        elprep: \$(elprep 2>&1 | head -n2 | tail -n1 | sed 's/^.*version //;s/ compiled.*\$//')
+    END_VERSIONS
+    """
+}
modules/elprep/filter/meta.yml (new file, 106 lines)
@@ -0,0 +1,106 @@
+name: "elprep_filter"
+description: "Filter, sort and markdup sam/bam files, with optional BQSR and variant calling."
+keywords:
+  - sort
+  - bam
+  - sam
+  - filter
+  - variant calling
+tools:
+  - "elprep":
+      description: "elPrep is a high-performance tool for preparing .sam/.bam files for variant calling in sequencing pipelines. It can be used as a drop-in replacement for SAMtools/Picard/GATK4."
+      homepage: "https://github.com/ExaScience/elprep"
+      documentation: "https://github.com/ExaScience/elprep"
+      tool_dev_url: "https://github.com/ExaScience/elprep"
+      doi: "10.1371/journal.pone.0244471"
+      licence: "['AGPL v3']"
+
+input:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - bam:
+      type: file
+      description: Input SAM/BAM file
+      pattern: "*.{bam,sam}"
+  - run_haplotypecaller:
+      type: boolean
+      description: Run variant calling on the input files. Needed to generate gvcf output.
+  - run_bqsr:
+      type: boolean
+      description: Run BQSR on the input files. Needed to generate recall metrics.
+  - reference_sequences:
+      type: file
+      description: Optional SAM header to replace the existing header.
+      pattern: "*.sam"
+  - filter_regions_bed:
+      type: file
+      description: Optional BED file containing regions to filter.
+      pattern: "*.bed"
+  - reference_elfasta:
+      type: file
+      description: Elfasta file, required for BQSR and variant calling.
+      pattern: "*.elfasta"
+  - known_sites:
+      type: file
+      description: Optional elsites file containing known SNPs for BQSR.
+      pattern: "*.elsites"
+  - target_regions_bed:
+      type: file
+      description: Optional BED file containing target regions for BQSR and variant calling.
+      pattern: "*.bed"
+  - intermediate_bqsr_tables:
+      type: file
+      description: Optional list of BQSR tables, used when parsing files created by `elprep split`
+      pattern: "*.table"
+  - bqsr_tables_only:
+      type: boolean
+      description: Write intermediate BQSR tables, used when parsing files created by `elprep split`.
+  - get_activity_profile:
+      type: boolean
+      description: Write the activity profile calculated by the haplotypecaller to the given file in IGV format.
+  - get_assembly_regions:
+      type: boolean
+      description: Write the assembly regions calculated by the haplotypecaller to the specified file in IGV format.
+output:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - versions:
+      type: file
+      description: File containing software versions
+      pattern: "versions.yml"
+  - bam:
+      type: file
+      description: Sorted, markdup, optionally BQSR BAM/SAM file
+      pattern: "*.{bam,sam}"
+  - metrics:
+      type: file
+      description: Optional duplicate metrics file generated by elprep
+      pattern: "*.{metrics.txt}"
+  - recall:
+      type: file
+      description: Optional recall metrics file generated by elprep
+      pattern: "*.{recall}"
+  - gvcf:
+      type: file
+      description: Optional GVCF output file
+      pattern: "*.{vcf.gz}"
+  - table:
+      type: file
+      description: Optional intermediate BQSR table output file
+      pattern: "*.{table}"
+  - activity_profile:
+      type: file
+      description: Optional activity profile output file
+      pattern: "*.{activity_profile.igv}"
+  - assembly_regions:
+      type: file
+      description: Optional assembly regions output file
+      pattern: "*.{assembly_regions.igv}"
+authors:
+  - "@matthdsm"
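Twelve positional inputs make call sites easiest to read fully spelled out. A minimal invocation sketch (file names are hypothetical; empty lists stand in for unused optional path inputs, a common nf-core convention):

    include { ELPREP_FILTER } from './modules/elprep/filter/main'

    workflow {
        ch_bam = Channel.of([ [ id:'test', single_end:false ], file('test.bam') ])  // hypothetical input
        ELPREP_FILTER(
            ch_bam,
            true,                     // run_haplotypecaller
            true,                     // run_bqsr
            [],                       // reference_sequences (optional)
            [],                       // filter_regions_bed (optional)
            file('genome.elfasta'),   // reference_elfasta
            file('known.elsites'),    // known_sites_elsites
            [],                       // target_regions_bed (optional)
            [],                       // intermediate_bqsr_tables (optional)
            false,                    // bqsr_tables_only
            false,                    // get_activity_profile
            false                     // get_assembly_regions
        )
    }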
modules/elprep/merge/main.nf (new file, 43 lines)
@@ -0,0 +1,43 @@
+process ELPREP_MERGE {
+    tag "$meta.id"
+    label 'process_low'
+
+    conda (params.enable_conda ? "bioconda::elprep=5.1.2" : null)
+    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
+        'https://depot.galaxyproject.org/singularity/elprep:5.1.2--he881be0_0':
+        'quay.io/biocontainers/elprep:5.1.2--he881be0_0' }"
+
+    input:
+    tuple val(meta), path(bam)
+
+    output:
+    tuple val(meta), path("output/**.{bam,sam}") , emit: bam
+    path "versions.yml" , emit: versions
+
+    when:
+    task.ext.when == null || task.ext.when
+
+    script:
+    def args = task.ext.args ?: ''
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    def suffix = args.contains("--output-type sam") ? "sam" : "bam"
+    def single_end = meta.single_end ? " --single-end" : ""
+
+    """
+    # create a directory and move all input there so elprep can find and merge the chunks
+    mkdir input
+    mv ${bam} input/
+
+    elprep merge \\
+        input/ \\
+        output/${prefix}.${suffix} \\
+        $args \\
+        ${single_end} \\
+        --nr-of-threads $task.cpus
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        elprep: \$(elprep 2>&1 | head -n2 | tail -n1 | sed 's/^.*version //;s/ compiled.*\$//')
+    END_VERSIONS
+    """
+}
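The merge module expects every chunk for a sample to be staged into one task, so callers typically regroup a per-chunk channel first. A hedged sketch using groupTuple (channel contents are hypothetical):

    include { ELPREP_MERGE } from './modules/elprep/merge/main'

    workflow {
        // hypothetical per-chunk BAMs sharing the same meta map
        ch_chunks = Channel.of(
            [ [ id:'test' ], file('chunk1.bam') ],
            [ [ id:'test' ], file('chunk2.bam') ]
        )
        ELPREP_MERGE(ch_chunks.groupTuple())   // one merge task per sample, all chunks staged together
    }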
modules/elprep/merge/meta.yml (new file, 44 lines)
@@ -0,0 +1,44 @@
+name: "elprep_merge"
+description: Merge split bam/sam chunks into one file
+keywords:
+  - bam
+  - sam
+  - merge
+tools:
+  - "elprep":
+      description: "elPrep is a high-performance tool for preparing .sam/.bam files for variant calling in sequencing pipelines. It can be used as a drop-in replacement for SAMtools/Picard/GATK4."
+      homepage: "https://github.com/ExaScience/elprep"
+      documentation: "https://github.com/ExaScience/elprep"
+      tool_dev_url: "https://github.com/ExaScience/elprep"
+      doi: "10.1371/journal.pone.0244471"
+      licence: "['AGPL v3']"
+
+input:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - bam:
+      type: file
+      description: List of BAM/SAM chunks to merge
+      pattern: "*.{bam,sam}"
+
+output:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - versions:
+      type: file
+      description: File containing software versions
+      pattern: "versions.yml"
+  - bam:
+      type: file
+      description: Merged BAM/SAM file
+      pattern: "*.{bam,sam}"
+
+authors:
+  - "@matthdsm"
modules/elprep/split/main.nf (new file, 45 lines)
@@ -0,0 +1,45 @@
+process ELPREP_SPLIT {
+    tag "$meta.id"
+    label 'process_low'
+
+    conda (params.enable_conda ? "bioconda::elprep=5.1.2" : null)
+    container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
+        'https://depot.galaxyproject.org/singularity/elprep:5.1.2--he881be0_0':
+        'quay.io/biocontainers/elprep:5.1.2--he881be0_0' }"
+
+    input:
+    tuple val(meta), path(bam)
+
+    output:
+    tuple val(meta), path("output/**.{bam,sam}"), emit: bam
+    path "versions.yml" , emit: versions
+
+    when:
+    task.ext.when == null || task.ext.when
+
+    script:
+    def args = task.ext.args ?: ''
+    def prefix = task.ext.prefix ?: "${meta.id}"
+    def single_end = meta.single_end ? " --single-end" : ""
+
+    """
+    # create a directory and move all input there so elprep can find and merge it before splitting
+    mkdir input
+    mv ${bam} input/
+
+    mkdir ${prefix}
+
+    elprep split \\
+        input \\
+        output/ \\
+        $args \\
+        $single_end \\
+        --nr-of-threads $task.cpus \\
+        --output-prefix $prefix
+
+    cat <<-END_VERSIONS > versions.yml
+    "${task.process}":
+        elprep: \$(elprep 2>&1 | head -n2 | tail -n1 | sed 's/^.*version //;s/ compiled.*\$//')
+    END_VERSIONS
+    """
+}
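ELPREP_SPLIT emits all chunks for a sample as one grouped item, while downstream per-chunk processing usually wants one chunk per task. A sketch of the scatter step (the input file name is hypothetical):

    include { ELPREP_SPLIT } from './modules/elprep/split/main'

    workflow {
        ch_bam = Channel.of([ [ id:'test', single_end:false ], file('test.bam') ])  // hypothetical input
        ELPREP_SPLIT(ch_bam)
        // scatter: turn [ meta, [chunk1, chunk2, ...] ] into one [ meta, chunk ] item per chunk
        ELPREP_SPLIT.out.bam.transpose().view()
    }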
modules/elprep/split/meta.yml (new file, 43 lines)
@@ -0,0 +1,43 @@
+name: "elprep_split"
+description: Split bam file into manageable chunks
+keywords:
+  - bam
+  - split by chromosome
+tools:
+  - "elprep":
+      description: "elPrep is a high-performance tool for preparing .sam/.bam files for variant calling in sequencing pipelines. It can be used as a drop-in replacement for SAMtools/Picard/GATK4."
+      homepage: "https://github.com/ExaScience/elprep"
+      documentation: "https://github.com/ExaScience/elprep"
+      tool_dev_url: "https://github.com/ExaScience/elprep"
+      doi: "10.1371/journal.pone.0244471"
+      licence: "['AGPL v3']"
+
+input:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - bam:
+      type: file
+      description: List of BAM/SAM files
+      pattern: "*.{bam,sam}"
+
+output:
+  - meta:
+      type: map
+      description: |
+        Groovy Map containing sample information
+        e.g. [ id:'test', single_end:false ]
+  - versions:
+      type: file
+      description: File containing software versions
+      pattern: "versions.yml"
+  - bam:
+      type: file
+      description: List of split BAM/SAM files
+      pattern: "*.{bam,sam}"
+
+authors:
+  - "@matthdsm"
@@ -8,13 +8,14 @@ LABEL \
 COPY environment.yml /
 RUN conda env create -f /environment.yml && conda clean -a

-# Add conda installation dir to PATH (instead of doing 'conda activate')
-ENV PATH /opt/conda/envs/nf-core-vep-104.3/bin:$PATH
-
 # Setup default ARG variables
 ARG GENOME=GRCh38
 ARG SPECIES=homo_sapiens
-ARG VEP_VERSION=99
+ARG VEP_VERSION=104
+ARG VEP_TAG=104.3
+
+# Add conda installation dir to PATH (instead of doing 'conda activate')
+ENV PATH /opt/conda/envs/nf-core-vep-${VEP_TAG}/bin:$PATH

 # Download Genome
 RUN vep_install \
@@ -27,4 +28,4 @@ RUN vep_install \
     --NO_BIOPERL --NO_HTSLIB --NO_TEST --NO_UPDATE

 # Dump the details of the installed packages to a file for posterity
-RUN conda env export --name nf-core-vep-104.3 > nf-core-vep-104.3.yml
+RUN conda env export --name nf-core-vep-${VEP_TAG} > nf-core-vep-${VEP_TAG}.yml
@@ -10,11 +10,12 @@ build_push() {
     VEP_TAG=$4

     docker build \
+        . \
         -t nfcore/vep:${VEP_TAG}.${GENOME} \
-        software/vep/. \
         --build-arg GENOME=${GENOME} \
         --build-arg SPECIES=${SPECIES} \
-        --build-arg VEP_VERSION=${VEP_VERSION}
+        --build-arg VEP_VERSION=${VEP_VERSION} \
+        --build-arg VEP_TAG=${VEP_TAG}

     docker push nfcore/vep:${VEP_TAG}.${GENOME}
 }
@@ -13,6 +13,7 @@ process ENSEMBLVEP {
     val species
     val cache_version
     path cache
+    path extra_files

     output:
     tuple val(meta), path("*.ann.vcf"), emit: vcf
@@ -10,17 +10,6 @@ tools:
       homepage: https://www.ensembl.org/info/docs/tools/vep/index.html
       documentation: https://www.ensembl.org/info/docs/tools/vep/script/index.html
       licence: ["Apache-2.0"]
-params:
-  - use_cache:
-      type: boolean
-      description: |
-        Enable the usage of containers with cache
-        Does not work with conda
-  - vep_tag:
-      type: value
-      description: |
-        Specify the tag for the container
-        https://hub.docker.com/r/nfcore/vep/tags
 input:
   - meta:
       type: map
@@ -47,6 +36,10 @@ input:
       type: file
       description: |
         path to VEP cache (optional)
+  - extra_files:
+      type: tuple
+      description: |
+        path to file(s) needed for plugins (optional)
 output:
   - vcf:
       type: file
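With the new extra_files input, plugin resources are staged into the task like any other path. An invocation sketch, under the assumption that the module's earlier inputs are the meta/VCF tuple and a genome string as in the nf-core ensemblvep module; the include path, cache choice and plugin file are all hypothetical:

    include { ENSEMBLVEP } from './modules/ensemblvep/main'

    workflow {
        ch_vcf      = Channel.of([ [ id:'test' ], file('test.vcf.gz') ])  // hypothetical input
        extra_files = [ file('CADD.tsv.gz') ]                             // hypothetical plugin resource; [] if none
        ENSEMBLVEP(ch_vcf, 'GRCh38', 'homo_sapiens', '104', [], extra_files)
    }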