mirror of https://github.com/MillironX/taxprofiler.git synced 2024-11-25 16:59:54 +00:00

Merge branch 'dev' into classification_centrifuge

sofstam 2022-03-25 13:48:23 +01:00
commit 37a70dc3e3
36 changed files with 403 additions and 377 deletions


@@ -8,12 +8,9 @@ trim_trailing_whitespace = true
 indent_size = 4
 indent_style = space
-[*.{yml,yaml}]
+[*.{md,yml,yaml,html,css,scss,js}]
 indent_size = 2
-[*.json]
-insert_final_newline = unset
 # These files are edited and tested upstream in nf-core/modules
 [/modules/nf-core/**]
 charset = unset


@@ -15,8 +15,7 @@ Contributions to the code are even more welcome ;)
 If you'd like to write some code for nf-core/taxprofiler, the standard workflow is as follows:
-1. Check that there isn't already an issue about your idea in the [nf-core/taxprofiler issues](https://github.com/nf-core/taxprofiler/issues) to avoid duplicating work
-   * If there isn't one already, please create one so that others know you're working on this
+1. Check that there isn't already an issue about your idea in the [nf-core/taxprofiler issues](https://github.com/nf-core/taxprofiler/issues) to avoid duplicating work. If there isn't one already, please create one so that others know you're working on this
 2. [Fork](https://help.github.com/en/github/getting-started-with-github/fork-a-repo) the [nf-core/taxprofiler repository](https://github.com/nf-core/taxprofiler) to your GitHub account
 3. Make the necessary changes / additions within your forked repository following [Pipeline conventions](#pipeline-contribution-conventions)
 4. Use `nf-core schema build` and add any new parameters to the pipeline JSON schema (requires [nf-core tools](https://github.com/nf-core/tools) >= 1.10).
@@ -49,9 +48,9 @@ These tests are run both with the latest available version of `Nextflow` and als
 :warning: Only in the unlikely and regretful event of a release happening with a bug.
-* On your own fork, make a new branch `patch` based on `upstream/master`.
-* Fix the bug, and bump version (X.Y.Z+1).
-* A PR should be made on `master` from patch to directly this particular bug.
+- On your own fork, make a new branch `patch` based on `upstream/master`.
+- Fix the bug, and bump version (X.Y.Z+1).
+- A PR should be made on `master` from patch to directly this particular bug.
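The patch-release steps above bump the version from X.Y.Z to X.Y.Z+1, i.e. only the last component is incremented. A minimal shell sketch of that bump (the `bump_patch` helper is hypothetical, not part of the nf-core tooling):

```shell
#!/usr/bin/env bash
# Increment the patch component of a semantic version string,
# e.g. 2.1.3 -> 2.1.4. Purely illustrative of the X.Y.Z+1 rule above.
bump_patch() {
  local version=$1
  local major minor patch
  IFS=. read -r major minor patch <<< "$version"
  echo "${major}.${minor}.$((patch + 1))"
}

bump_patch "2.1.3"   # prints 2.1.4
```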
 ## Getting help
@@ -73,7 +72,7 @@ If you wish to contribute a new step, please use the following coding standards:
 6. Add sanity checks and validation for all relevant parameters.
 7. Perform local tests to validate that the new code works as expected.
 8. If applicable, add a new test command in `.github/workflow/ci.yml`.
-9. Update MultiQC config `assets/multiqc_config.yaml` so relevant suffixes, file name clean up and module plots are in the appropriate order. If applicable, add a [MultiQC](https://https://multiqc.info/) module.
+9. Update MultiQC config `assets/multiqc_config.yml` so relevant suffixes, file name clean up and module plots are in the appropriate order. If applicable, add a [MultiQC](https://https://multiqc.info/) module.
 10. Add a description of the output files and if relevant any appropriate images from the MultiQC report to `docs/output.md`.
 ### Default values
@@ -92,8 +91,8 @@ The process resources can be passed on to the tool dynamically within the process
 Please use the following naming schemes, to make it easy to understand what is going where.
-* initial process channel: `ch_output_from_<process>`
-* intermediate and terminal channels: `ch_<previousprocess>_for_<nextprocess>`
+- initial process channel: `ch_output_from_<process>`
+- intermediate and terminal channels: `ch_<previousprocess>_for_<nextprocess>`
 ### Nextflow version bumping


@@ -2,7 +2,6 @@ name: Bug report
 description: Report something that is broken or incorrect
 labels: bug
 body:
   - type: markdown
     attributes:
       value: |


@@ -10,7 +10,6 @@ Remember that PRs should be made against the dev branch, unless you're preparing
 Learn more about contributing: [CONTRIBUTING.md](https://github.com/nf-core/taxprofiler/tree/master/.github/CONTRIBUTING.md)
 -->
-<!-- markdownlint-disable ul-indent -->
 ## PR checklist
@@ -19,7 +18,7 @@ Learn more about contributing: [CONTRIBUTING.md](https://github.com/nf-core/taxp
 - [ ] If you've added a new tool - have you followed the pipeline conventions in the [contribution docs](https://github.com/nf-core/taxprofiler/tree/master/.github/CONTRIBUTING.md)
 - [ ] If necessary, also make a PR on the nf-core/taxprofiler _branch_ on the [nf-core/test-datasets](https://github.com/nf-core/test-datasets) repository.
 - [ ] Make sure your code lints (`nf-core lint`).
-- [ ] Ensure the test suite passes (`nextflow run . -profile test,docker` --outdir <OUTDIR>`).
+- [ ] Ensure the test suite passes (`nextflow run . -profile test,docker --outdir <OUTDIR>`).
 - [ ] Usage Documentation in `docs/usage.md` is updated.
 - [ ] Output Documentation in `docs/output.md` is updated.
 - [ ] `CHANGELOG.md` is updated.


@@ -18,13 +18,10 @@ jobs:
       # TODO nf-core: You can customise AWS full pipeline tests as required
       # Add full size test data (but still relatively small datasets for few samples)
       # on the `test_full.config` test runs with only one set of parameters
       with:
         workspace_id: ${{ secrets.TOWER_WORKSPACE_ID }}
         access_token: ${{ secrets.TOWER_ACCESS_TOKEN }}
         compute_env: ${{ secrets.TOWER_COMPUTE_ENV }}
-        pipeline: ${{ github.repository }}
-        revision: ${{ github.sha }}
         workdir: s3://${{ secrets.AWS_S3_BUCKET }}/work/taxprofiler/work-${{ github.sha }}
         parameters: |
           {


@@ -10,15 +10,13 @@ jobs:
     if: github.repository == 'nf-core/taxprofiler'
     runs-on: ubuntu-latest
     steps:
-      # Launch workflow using Tower CLI tool action
       - name: Launch workflow via tower
         uses: nf-core/tower-action@v3
         with:
           workspace_id: ${{ secrets.TOWER_WORKSPACE_ID }}
           access_token: ${{ secrets.TOWER_ACCESS_TOKEN }}
           compute_env: ${{ secrets.TOWER_COMPUTE_ENV }}
-          pipeline: ${{ github.repository }}
-          revision: ${{ github.sha }}
           workdir: s3://${{ secrets.AWS_S3_BUCKET }}/work/taxprofiler/work-${{ github.sha }}
           parameters: |
             {


@@ -13,8 +13,7 @@ jobs:
       - name: Check PRs
         if: github.repository == 'nf-core/taxprofiler'
         run: |
-          { [[ ${{github.event.pull_request.head.repo.full_name }} == nf-core/taxprofiler ]] && [[ $GITHUB_HEAD_REF = "dev" ]]; } || [[ $GITHUB_HEAD_REF == "patch" ]]
+          "{ [[ ${{github.event.pull_request.head.repo.full_name }} == nf-core/taxprofiler ]] && [[ $GITHUB_HEAD_REF = "dev" ]]; } || [[ $GITHUB_HEAD_REF == "patch" ]]"
       # If the above check failed, post a comment on the PR explaining the failure
       # NOTE - this doesn't currently work if the PR is coming from a fork, due to limitations in GitHub actions secrets
@@ -43,4 +42,4 @@ jobs:
             Thanks again for your contribution!
           repo-token: ${{ secrets.GITHUB_TOKEN }}
           allow-repeats: false
 #
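The branch-check condition in the workflow above allows a PR only from the main repo's `dev` branch or from any `patch` branch. The same logic can be exercised as plain bash (the `check_branch` function and its variable names are stand-ins for the GitHub Actions context values):

```shell
#!/usr/bin/env bash
# Sketch of the branch-guard logic: succeed when the PR head is the
# main repo's dev branch, or any repo's patch branch; fail otherwise.
check_branch() {
  local head_repo=$1 head_ref=$2
  { [[ $head_repo == "nf-core/taxprofiler" ]] && [[ $head_ref == "dev" ]]; } \
    || [[ $head_ref == "patch" ]]
}

check_branch "nf-core/taxprofiler" "dev"  && echo "dev PR allowed"
check_branch "someuser/taxprofiler" "patch" && echo "patch PR allowed"
check_branch "someuser/taxprofiler" "feature" || echo "other PRs rejected"
```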


@@ -16,18 +16,18 @@ jobs:
   test:
     name: Run pipeline with test data
     # Only run on push if this is the nf-core dev branch (merged PRs)
-    if: ${{ github.event_name != 'push' || (github.event_name == 'push' && github.repository == 'nf-core/taxprofiler') }}
+    if: "${{ github.event_name != 'push' || (github.event_name == 'push' && github.repository == 'nf-core/taxprofiler') }}"
     runs-on: ubuntu-latest
     strategy:
       matrix:
         # Nextflow versions
         include:
           # Test pipeline minimum Nextflow version
-          - NXF_VER: '21.10.3'
-            NXF_EDGE: ''
+          - NXF_VER: "21.10.3"
+            NXF_EDGE: ""
           # Test latest edge release of Nextflow
-          - NXF_VER: ''
-            NXF_EDGE: '1'
+          - NXF_VER: ""
+            NXF_EDGE: "1"
     steps:
       - name: Check out pipeline code
         uses: actions/checkout@v2
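The `include` matrix above expands to exactly two jobs: one pinning the minimum Nextflow version, one taking the latest edge release. A small shell sketch of those two combinations (the loop and encoding are illustrative only, not how Actions expands a matrix):

```shell
#!/usr/bin/env bash
# The two NXF_VER/NXF_EDGE combinations from the matrix above,
# encoded here as "VER:EDGE" pairs for illustration.
combos=("21.10.3:" ":1")
for combo in "${combos[@]}"; do
  NXF_VER=${combo%%:*}    # text before the colon
  NXF_EDGE=${combo##*:}   # text after the colon
  echo "NXF_VER='${NXF_VER}' NXF_EDGE='${NXF_EDGE}'"
done
```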
@@ -48,4 +48,5 @@ jobs:
         # Remember that you can parallelise this by using strategy.matrix
         run: |
           nextflow run ${GITHUB_WORKSPACE} -profile test,docker --outdir ./results
+      # TODO Add test that runs with pre-downloaded and decompressed databases
 #


@@ -1,6 +1,7 @@
 name: nf-core linting
 # This workflow is triggered on pushes and PRs to the repository.
-# It runs the `nf-core lint` and markdown lint tests to ensure that the code meets the nf-core guidelines
+# It runs the `nf-core lint` and markdown lint tests to ensure
+# that the code meets the nf-core guidelines.
 on:
   push:
   pull_request:
@@ -8,42 +9,6 @@ on:
     types: [published]
 jobs:
-  Markdown:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v2
-      - uses: actions/setup-node@v2
-      - name: Install markdownlint
-        run: npm install -g markdownlint-cli
-      - name: Run Markdownlint
-        run: markdownlint .
-      # If the above check failed, post a comment on the PR explaining the failure
-      - name: Post PR comment
-        if: failure()
-        uses: mshick/add-pr-comment@v1
-        with:
-          message: |
-            ## Markdown linting is failing
-            To keep the code consistent with lots of contributors, we run automated code consistency checks.
-            To fix this CI test, please run:
-            * Install `markdownlint-cli`
-              * On Mac: `brew install markdownlint-cli`
-              * Everything else: [Install `npm`](https://www.npmjs.com/get-npm) then [install `markdownlint-cli`](https://www.npmjs.com/package/markdownlint-cli) (`npm install -g markdownlint-cli`)
-            * Fix the markdown errors
-              * Automatically: `markdownlint . --fix`
-              * Manually resolve anything left from `markdownlint .`
-            Once you push these changes the test should pass, and you can hide this comment :+1:
-            We highly recommend setting up markdownlint in your code editor so that this formatting is done automatically on save. Ask about it on Slack for help!
-            Thanks again for your contribution!
-          repo-token: ${{ secrets.GITHUB_TOKEN }}
-          allow-repeats: false
   EditorConfig:
     runs-on: ubuntu-latest
     steps:
@@ -55,49 +20,24 @@ jobs:
         run: npm install -g editorconfig-checker
       - name: Run ECLint check
-        run: editorconfig-checker -exclude README.md $(git ls-files | grep -v test)
+        run: editorconfig-checker -exclude README.md $(find .* -type f | grep -v '.git\|.py\|.md\|json\|yml\|yaml\|html\|css\|work\|.nextflow\|build\|nf_core.egg-info\|log.txt\|Makefile')
-  YAML:
-    runs-on: ubuntu-latest
-    steps:
-      - name: Checkout
-        uses: actions/checkout@master
-      - name: 'Yamllint'
-        uses: karancode/yamllint-github-action@master
-        with:
-          yamllint_file_or_dir: '.'
-          yamllint_config_filepath: '.yamllint.yml'
-      # If the above check failed, post a comment on the PR explaining the failure
-      - name: Post PR comment
-        if: failure()
-        uses: mshick/add-pr-comment@v1
-        with:
-          message: |
-            ## YAML linting is failing
-            To keep the code consistent with lots of contributors, we run automated code consistency checks.
-            To fix this CI test, please run:
-            * Install `yamllint`
-              * Install `yamllint` following [this](https://yamllint.readthedocs.io/en/stable/quickstart.html#installing-yamllint)
-                instructions or alternative install it in your [conda environment](https://anaconda.org/conda-forge/yamllint)
-            * Fix the markdown errors
-              * Run the test locally: `yamllint $(find . -type f -name "*.yml" -o -name "*.yaml") -c ./.yamllint.yml`
-              * Fix any reported errors in your YAML files
-            Once you push these changes the test should pass, and you can hide this comment :+1:
-            We highly recommend setting up yaml-lint in your code editor so that this formatting is done automatically on save. Ask about it on Slack for help!
-            Thanks again for your contribution!
-          repo-token: ${{ secrets.GITHUB_TOKEN }}
-          allow-repeats: false
+  Prettier:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+      - uses: actions/setup-node@v2
+      - name: Install Prettier
+        run: npm install -g prettier
+      - name: Run Prettier --check
+        run: prettier --check ${GITHUB_WORKSPACE}
   nf-core:
     runs-on: ubuntu-latest
     steps:
       - name: Check out pipeline code
         uses: actions/checkout@v2
@@ -110,8 +50,8 @@ jobs:
       - uses: actions/setup-python@v1
         with:
-          python-version: '3.6'
-          architecture: 'x64'
+          python-version: "3.6"
+          architecture: "x64"
       - name: Install dependencies
         run: |
@@ -139,3 +79,4 @@ jobs:
             lint_results.md
             PR_number.txt
+#
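The new ECLint step above swaps `git ls-files` for a `find`-plus-`grep -v` filter, where the quoted pattern is a BRE alternation of excluded path fragments. A self-contained sketch of how that filter behaves on a hypothetical file list (the paths below are invented for illustration; the pattern is shortened from the real one):

```shell
#!/usr/bin/env bash
# Filter a file list the way the ECLint step does: drop any path that
# matches one of the \|-separated fragments. Hypothetical paths.
files="./main.nf
./docs/usage.md
./assets/multiqc_config.yml
./.github/workflows/ci.yml
./bin/check_samplesheet.py
./conf/base.config"

# Only ./main.nf and ./conf/base.config survive the exclusions.
echo "$files" | grep -v '.git\|.py\|.md\|json\|yml\|yaml\|html\|css'
```

Note that each `.` in the pattern matches any character (it is a regex, not a literal dot), so the filter is deliberately coarse.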


@@ -1,4 +1,3 @@
 name: nf-core linting comment
 # This workflow is triggered after the linting action is complete
 # It posts an automated comment to the PR, even if the PR is coming from a fork
@@ -27,4 +26,4 @@ jobs:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
           number: ${{ steps.pr_number.outputs.pr_number }}
           path: linting-logs/lint_results.md
 #


@@ -4,7 +4,7 @@ vscode:
   extensions: # based on nf-core.nf-core-extensionpack
     - codezombiech.gitignore # Language support for .gitignore files
     # - cssho.vscode-svgviewer # SVG viewer
-    - davidanson.vscode-markdownlint # Markdown/CommonMark linting and style checking for Visual Studio Code
+    - esbenp.prettier-vscode # Markdown/CommonMark linting and style checking for Visual Studio Code
     - eamodio.gitlens # Quickly glimpse into whom, why, and when a line or code block was changed
     - EditorConfig.EditorConfig # override user/workspace settings with settings found in .editorconfig files
     - Gruntfuggly.todo-tree # Display TODO and FIXME in a tree view in the activity bar


@@ -1,14 +0,0 @@
-# Markdownlint configuration file
-default: true
-line-length: false
-ul-indent:
-  indent: 4
-no-duplicate-header:
-  siblings_only: true
-no-inline-html:
-  allowed_elements:
-    - img
-    - p
-    - kbd
-    - details
-    - summary

.prettierignore (new file)

@@ -0,0 +1,2 @@
+testing/
+tests/

.prettierrc.yml (new file)

@@ -0,0 +1 @@
+printWidth: 120


@@ -1,6 +0,0 @@
-extends: default
-rules:
-  document-start: disable
-  line-length: disable
-  truthy: disable


@@ -10,25 +10,28 @@
 ## Pipeline tools
-* [FastQC](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/)
-* [MultiQC](https://pubmed.ncbi.nlm.nih.gov/27312411/)
+- [FastQC](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/)
+- [MultiQC](https://pubmed.ncbi.nlm.nih.gov/27312411/)
   > Ewels P, Magnusson M, Lundin S, Käller M. MultiQC: summarize analysis results for multiple tools and samples in a single report. Bioinformatics. 2016 Oct 1;32(19):3047-8. doi: 10.1093/bioinformatics/btw354. Epub 2016 Jun 16. PubMed PMID: 27312411; PubMed Central PMCID: PMC5039924.
 * [Porechop](https://github.com/rrwick/Porechop)
 ## Software packaging/containerisation tools
-* [Anaconda](https://anaconda.com)
+- [Anaconda](https://anaconda.com)
   > Anaconda Software Distribution. Computer software. Vers. 2-2.4.0. Anaconda, Nov. 2016. Web.
-* [Bioconda](https://pubmed.ncbi.nlm.nih.gov/29967506/)
+- [Bioconda](https://pubmed.ncbi.nlm.nih.gov/29967506/)
   > Grüning B, Dale R, Sjödin A, Chapman BA, Rowe J, Tomkins-Tinch CH, Valieris R, Köster J; Bioconda Team. Bioconda: sustainable and comprehensive software distribution for the life sciences. Nat Methods. 2018 Jul;15(7):475-476. doi: 10.1038/s41592-018-0046-7. PubMed PMID: 29967506.
-* [BioContainers](https://pubmed.ncbi.nlm.nih.gov/28379341/)
+- [BioContainers](https://pubmed.ncbi.nlm.nih.gov/28379341/)
   > da Veiga Leprevost F, Grüning B, Aflitos SA, Röst HL, Uszkoreit J, Barsnes H, Vaudel M, Moreno P, Gatto L, Weber J, Bai M, Jimenez RC, Sachsenberg T, Pfeuffer J, Alvarez RV, Griss J, Nesvizhskii AI, Perez-Riverol Y. BioContainers: an open-source and community-driven framework for software standardization. Bioinformatics. 2017 Aug 15;33(16):2580-2582. doi: 10.1093/bioinformatics/btx192. PubMed PMID: 28379341; PubMed Central PMCID: PMC5870671.
-* [Docker](https://dl.acm.org/doi/10.5555/2600239.2600241)
-* [Singularity](https://pubmed.ncbi.nlm.nih.gov/28494014/)
+- [Docker](https://dl.acm.org/doi/10.5555/2600239.2600241)
+- [Singularity](https://pubmed.ncbi.nlm.nih.gov/28494014/)
   > Kurtzer GM, Sochat V, Bauer MW. Singularity: Scientific containers for mobility of compute. PLoS One. 2017 May 11;12(5):e0177459. doi: 10.1371/journal.pone.0177459. eCollection 2017. PubMed PMID: 28494014; PubMed Central PMCID: PMC5426675.


@@ -17,11 +17,13 @@
 ## Introduction
 <!-- TODO nf-core: Write a 1-2 sentence summary of what data the pipeline is for and what it does -->
 **nf-core/taxprofiler** is a bioinformatics best-practice analysis pipeline for taxonomic profiling of shotgun metagenomic data. It allows for in-parallel profiling against multiple profiling tools and databases and produces standardised output tables.
 The pipeline is built using [Nextflow](https://www.nextflow.io), a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The [Nextflow DSL2](https://www.nextflow.io/docs/latest/dsl2.html) implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from [nf-core/modules](https://github.com/nf-core/modules) in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!
 <!-- TODO nf-core: Add full-sized test dataset and amend the paragraph below if applicable -->
 On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the [nf-core website](https://nf-co.re/taxprofiler/results).
 ## Pipeline summary
## Pipeline summary ## Pipeline summary
@@ -51,7 +53,7 @@ On release, automated continuous integration tests run the pipeline on a full-si
 1. Install [`Nextflow`](https://www.nextflow.io/docs/latest/getstarted.html#installation) (`>=21.10.3`)
-2. Install any of [`Docker`](https://docs.docker.com/engine/installation/), [`Singularity`](https://www.sylabs.io/guides/3.0/user-guide/), [`Podman`](https://podman.io/), [`Shifter`](https://nersc.gitlab.io/development/shifter/how-to-use/) or [`Charliecloud`](https://hpc.github.io/charliecloud/) for full pipeline reproducibility _(please only use [`Conda`](https://conda.io/miniconda.html) as a last resort; see [docs](https://nf-co.re/usage/configuration#basic-configuration-profiles))_
+2. Install any of [`Docker`](https://docs.docker.com/engine/installation/), [`Singularity`](https://www.sylabs.io/guides/3.0/user-guide/) (you can follow [this tutorial](https://singularity-tutorial.github.io/01-installation/)), [`Podman`](https://podman.io/), [`Shifter`](https://nersc.gitlab.io/development/shifter/how-to-use/) or [`Charliecloud`](https://hpc.github.io/charliecloud/) for full pipeline reproducibility _(you can use [`Conda`](https://conda.io/miniconda.html) both to install Nextflow itself and also to manage software within pipelines. Please only use it within pipelines as a last resort; see [docs](https://nf-co.re/usage/configuration#basic-configuration-profiles))_.
 3. Download the pipeline and test it on a minimal dataset with a single command:
@@ -98,6 +100,7 @@ For further information or help, don't hesitate to get in touch on the [Slack `#
 <!-- If you use nf-core/taxprofiler for your analysis, please cite it using the following doi: [10.5281/zenodo.XXXXXX](https://doi.org/10.5281/zenodo.XXXXXX) -->
 <!-- TODO nf-core: Add bibliography of tools and data used in your pipeline -->
 An extensive list of references for the tools used by the pipeline can be found in the [`CITATIONS.md`](CITATIONS.md) file.
 You can cite the `nf-core` publication as follows:


@@ -1,53 +1,111 @@
 <html>
   <head>
-    <meta charset="utf-8">
-    <meta http-equiv="X-UA-Compatible" content="IE=edge">
-    <meta name="viewport" content="width=device-width, initial-scale=1">
-    <meta name="description" content="nf-core/taxprofiler: Taxonomic profiling of shotgun metagenomic data">
+    <meta charset="utf-8" />
+    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
+    <meta name="viewport" content="width=device-width, initial-scale=1" />
+    <!-- prettier-ignore -->
+    <meta name="description" content="nf-core/taxprofiler: Taxonomic profiling of shotgun metagenomic data" />
     <title>nf-core/taxprofiler Pipeline Report</title>
   </head>
   <body>
-    <div style="font-family: Helvetica, Arial, sans-serif; padding: 30px; max-width: 800px; margin: 0 auto;">
-      <img src="cid:nfcorepipelinelogo">
+    <div style="font-family: Helvetica, Arial, sans-serif; padding: 30px; max-width: 800px; margin: 0 auto">
+      <img src="cid:nfcorepipelinelogo" />
       <h1>nf-core/taxprofiler v${version}</h1>
       <h2>Run Name: $runName</h2>
-      <% if (!success){
-          out << """
-          <div style="color: #a94442; background-color: #f2dede; border-color: #ebccd1; padding: 15px; margin-bottom: 20px; border: 1px solid transparent; border-radius: 4px;">
-              <h4 style="margin-top:0; color: inherit;">nf-core/taxprofiler execution completed unsuccessfully!</h4>
+      <% if (!success){ out << """
+      <div
+        style="
+          color: #a94442;
+          background-color: #f2dede;
+          border-color: #ebccd1;
+          padding: 15px;
+          margin-bottom: 20px;
+          border: 1px solid transparent;
+          border-radius: 4px;
+        "
+      >
+        <h4 style="margin-top: 0; color: inherit">nf-core/taxprofiler execution completed unsuccessfully!</h4>
         <p>The exit status of the task that caused the workflow execution to fail was: <code>$exitStatus</code>.</p>
         <p>The full error message was:</p>
-        <pre style="white-space: pre-wrap; overflow: visible; margin-bottom: 0;">${errorReport}</pre>
+        <pre style="white-space: pre-wrap; overflow: visible; margin-bottom: 0">${errorReport}</pre>
       </div>
-      """
-      } else {
-      out << """
-      <div style="color: #3c763d; background-color: #dff0d8; border-color: #d6e9c6; padding: 15px; margin-bottom: 20px; border: 1px solid transparent; border-radius: 4px;">
+      """ } else { out << """
+      <div
+        style="
+          color: #3c763d;
+          background-color: #dff0d8;
+          border-color: #d6e9c6;
+          padding: 15px;
+          margin-bottom: 20px;
+          border: 1px solid transparent;
+          border-radius: 4px;
+        "
+      >
         nf-core/taxprofiler execution completed successfully!
       </div>
-      """
-      }
-      %>
+      """ } %>
       <p>The workflow was completed at <strong>$dateComplete</strong> (duration: <strong>$duration</strong>)</p>
       <p>The command used to launch the workflow was as follows:</p>
-      <pre style="white-space: pre-wrap; overflow: visible; background-color: #ededed; padding: 15px; border-radius: 4px; margin-bottom:30px;">$commandLine</pre>
+      <pre
+        style="
+          white-space: pre-wrap;
+          overflow: visible;
+          background-color: #ededed;
+          padding: 15px;
+          border-radius: 4px;
+          margin-bottom: 30px;
+        "
+      >
+        $commandLine</pre
+      >
       <h3>Pipeline Configuration:</h3>
-      <table style="width:100%; max-width:100%; border-spacing: 0; border-collapse: collapse; border:0; margin-bottom: 30px;">
-        <tbody style="border-bottom: 1px solid #ddd;">
-          <% out << summary.collect{ k,v -> "<tr><th style='text-align:left; padding: 8px 0; line-height: 1.42857143; vertical-align: top; border-top: 1px solid #ddd;'>$k</th><td style='text-align:left; padding: 8px; line-height: 1.42857143; vertical-align: top; border-top: 1px solid #ddd;'><pre style='white-space: pre-wrap; overflow: visible;'>$v</pre></td></tr>" }.join("\n") %>
+      <table
+        style="
+          width: 100%;
+          max-width: 100%;
+          border-spacing: 0;
+          border-collapse: collapse;
+          border: 0;
+          margin-bottom: 30px;
+        "
+      >
+        <tbody style="border-bottom: 1px solid #ddd">
+          <% out << summary.collect{ k,v -> "
+          <tr>
+            <th
+              style="
+                text-align: left;
+                padding: 8px 0;
+                line-height: 1.42857143;
+                vertical-align: top;
+                border-top: 1px solid #ddd;
+              "
+            >
+              $k
+            </th>
+            <td
+              style="
+                text-align: left;
+                padding: 8px;
+                line-height: 1.42857143;
+                vertical-align: top;
+                border-top: 1px solid #ddd;
+              "
+            >
+              <pre style="white-space: pre-wrap; overflow: visible">$v</pre>
+            </td>
+          </tr>
+          " }.join("\n") %>
         </tbody>
       </table>
       <p>nf-core/taxprofiler</p>
       <p><a href="https://github.com/nf-core/taxprofiler">https://github.com/nf-core/taxprofiler</a></p>
     </div>
   </body>
 </html>

View file

@@ -1,11 +0,0 @@
-report_comment: >
-    This report has been generated by the <a href="https://github.com/nf-core/taxprofiler" target="_blank">nf-core/taxprofiler</a>
-    analysis pipeline. For information about how to interpret these results, please see the
-    <a href="https://nf-co.re/taxprofiler" target="_blank">documentation</a>.
-report_section_order:
-    software_versions:
-        order: -1000
-    nf-core-taxprofiler-summary:
-        order: -1001
-export_plots: true
assets/multiqc_config.yml Normal file
View file

@@ -0,0 +1,11 @@
+report_comment: >
+    This report has been generated by the <a href="https://github.com/nf-core/taxprofiler" target="_blank">nf-core/taxprofiler</a>
+    analysis pipeline. For information about how to interpret these results, please see the
+    <a href="https://nf-co.re/taxprofiler" target="_blank">documentation</a>.
+report_section_order:
+    software_versions:
+        order: -1000
+    "nf-core-taxprofiler-summary":
+        order: -1001
+export_plots: true

View file

@@ -31,9 +31,6 @@
                 ]
             }
         },
-        "required": [
-            "sample",
-            "fastq_1"
-        ]
+        "required": ["sample", "fastq_1"]
     }
 }

View file

@@ -24,8 +24,7 @@ params {
     // TODO nf-core: Give any required params for the test so that command line flags are not needed
     input     = 'https://raw.githubusercontent.com/nf-core/test-datasets/taxprofiler/samplesheet.csv'
     outdir    = "./results"
-    // TODO replace with official once ready
-    databases = 'https://raw.githubusercontent.com/jfy133/nf-core-test-datasets/taxprofiler/database.csv'
+    databases = 'https://raw.githubusercontent.com/nf-core/test-datasets/taxprofiler/database.csv'
     run_kraken2 = true
     run_malt = true
     shortread_clipmerge = true

View file

@@ -2,9 +2,9 @@
 The nf-core/taxprofiler documentation is split into the following pages:
-* [Usage](usage.md)
-    * An overview of how the pipeline works, how to run it and a description of all of the different command-line flags.
-* [Output](output.md)
-    * An overview of the different results produced by the pipeline and how to interpret them.
+- [Usage](usage.md)
+  - An overview of how the pipeline works, how to run it and a description of all of the different command-line flags.
+- [Output](output.md)
+  - An overview of the different results produced by the pipeline and how to interpret them.
 You can find a lot more documentation about installing, configuring and running nf-core pipelines on the website: [https://nf-co.re](https://nf-co.re)

View file

@@ -12,18 +12,18 @@ The directories listed below will be created in the results directory after the
 The pipeline is built using [Nextflow](https://www.nextflow.io/) and processes data using the following steps:
-* [FastQC](#fastqc) - Raw read QC
-* [MultiQC](#multiqc) - Aggregate report describing results and QC from the whole pipeline
-* [Pipeline information](#pipeline-information) - Report metrics generated during the workflow execution
+- [FastQC](#fastqc) - Raw read QC
+- [MultiQC](#multiqc) - Aggregate report describing results and QC from the whole pipeline
+- [Pipeline information](#pipeline-information) - Report metrics generated during the workflow execution
 ### FastQC
 <details markdown="1">
 <summary>Output files</summary>
-* `fastqc/`
-    * `*_fastqc.html`: FastQC report containing quality metrics.
-    * `*_fastqc.zip`: Zip archive containing the FastQC report, tab-delimited data file and plot images.
+- `fastqc/`
+  - `*_fastqc.html`: FastQC report containing quality metrics.
+  - `*_fastqc.zip`: Zip archive containing the FastQC report, tab-delimited data file and plot images.
 </details>
@@ -42,10 +42,10 @@ The pipeline is built using [Nextflow](https://www.nextflow.io/) and processes d
 <details markdown="1">
 <summary>Output files</summary>
-* `multiqc/`
-    * `multiqc_report.html`: a standalone HTML file that can be viewed in your web browser.
-    * `multiqc_data/`: directory containing parsed statistics from the different tools used in the pipeline.
-    * `multiqc_plots/`: directory containing static images from the report in various formats.
+- `multiqc/`
+  - `multiqc_report.html`: a standalone HTML file that can be viewed in your web browser.
+  - `multiqc_data/`: directory containing parsed statistics from the different tools used in the pipeline.
+  - `multiqc_plots/`: directory containing static images from the report in various formats.
 </details>
@@ -58,10 +58,10 @@ Results generated by MultiQC collate pipeline QC from supported tools e.g. FastQ
 <details markdown="1">
 <summary>Output files</summary>
-* `pipeline_info/`
-    * Reports generated by Nextflow: `execution_report.html`, `execution_timeline.html`, `execution_trace.txt` and `pipeline_dag.dot`/`pipeline_dag.svg`.
-    * Reports generated by the pipeline: `pipeline_report.html`, `pipeline_report.txt` and `software_versions.yml`. The `pipeline_report*` files will only be present if the `--email` / `--email_on_fail` parameter's are used when running the pipeline.
-    * Reformatted samplesheet files used as input to the pipeline: `samplesheet.valid.csv`.
+- `pipeline_info/`
+  - Reports generated by Nextflow: `execution_report.html`, `execution_timeline.html`, `execution_trace.txt` and `pipeline_dag.dot`/`pipeline_dag.svg`.
+  - Reports generated by the pipeline: `pipeline_report.html`, `pipeline_report.txt` and `software_versions.yml`. The `pipeline_report*` files will only be present if the `--email` / `--email_on_fail` parameters are used when running the pipeline.
+  - Reformatted samplesheet files used as input to the pipeline: `samplesheet.valid.csv`.
 </details>

View file

@@ -66,7 +66,7 @@ Note that the pipeline will create the following files in your working directory
 ```console
 work                # Directory containing the nextflow working files
-results             # Finished results (configurable, see below)
+<OUTDIR>            # Finished results in specified location (defined with --outdir)
 .nextflow_log       # Log file from Nextflow
 # Other nextflow hidden files, eg. history of pipeline runs and old logs.
 ```
@@ -106,25 +106,25 @@ They are loaded in sequence, so later profiles can overwrite earlier profiles.
 If `-profile` is not specified, the pipeline will run locally and expect all software to be installed and available on the `PATH`. This is _not_ recommended.
-* `docker`
-    * A generic configuration profile to be used with [Docker](https://docker.com/)
-* `singularity`
-    * A generic configuration profile to be used with [Singularity](https://sylabs.io/docs/)
-* `podman`
-    * A generic configuration profile to be used with [Podman](https://podman.io/)
-* `shifter`
-    * A generic configuration profile to be used with [Shifter](https://nersc.gitlab.io/development/shifter/how-to-use/)
-* `charliecloud`
-    * A generic configuration profile to be used with [Charliecloud](https://hpc.github.io/charliecloud/)
-* `conda`
-    * A generic configuration profile to be used with [Conda](https://conda.io/docs/). Please only use Conda as a last resort i.e. when it's not possible to run the pipeline with Docker, Singularity, Podman, Shifter or Charliecloud.
-* `test`
-    * A profile with a complete configuration for automated testing
-    * Includes links to test data so needs no other parameters
+- `docker`
+  - A generic configuration profile to be used with [Docker](https://docker.com/)
+- `singularity`
+  - A generic configuration profile to be used with [Singularity](https://sylabs.io/docs/)
+- `podman`
+  - A generic configuration profile to be used with [Podman](https://podman.io/)
+- `shifter`
+  - A generic configuration profile to be used with [Shifter](https://nersc.gitlab.io/development/shifter/how-to-use/)
+- `charliecloud`
+  - A generic configuration profile to be used with [Charliecloud](https://hpc.github.io/charliecloud/)
+- `conda`
+  - A generic configuration profile to be used with [Conda](https://conda.io/docs/). Please only use Conda as a last resort i.e. when it's not possible to run the pipeline with Docker, Singularity, Podman, Shifter or Charliecloud.
+- `test`
+  - A profile with a complete configuration for automated testing
+  - Includes links to test data so needs no other parameters
 ### `-resume`
-Specify this when restarting a pipeline. Nextflow will used cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously.
+Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously. For input to be considered the same, not only the names must be identical but the files' contents as well. For more info about this parameter, see [this blog post](https://www.nextflow.io/blog/2019/demystifying-nextflow-resume.html).
 You can also supply a run name to resume a specific run: `-resume [run-name]`. Use the `nextflow log` command to show previous run names.
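In practice that means relaunching from the same launch directory, optionally naming the run to resume; `nextflow log` lists earlier run names (the run name below is only illustrative):

```console
$ nextflow run nf-core/taxprofiler -profile docker --input samplesheet.csv -resume
$ nextflow log
$ nextflow run nf-core/taxprofiler -profile docker --input samplesheet.csv -resume golden_curie
```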
@@ -186,6 +186,7 @@ process {
 ```
 > **NB:** We specify the full process name i.e. `NFCORE_RNASEQ:RNASEQ:ALIGN_STAR:STAR_ALIGN` in the config file because this takes priority over the short name (`STAR_ALIGN`) and allows existing configuration using the full process name to be correctly overridden.
+>
 > If you get a warning suggesting that the process selector isn't recognised check that the process name has been specified correctly.
 ### Updating containers
@@ -196,7 +197,7 @@ The [Nextflow DSL2](https://www.nextflow.io/docs/latest/dsl2.html) implementatio
 2. Find the latest version of the Biocontainer available on [Quay.io](https://quay.io/repository/biocontainers/pangolin?tag=latest&tab=tags)
 3. Create the custom config accordingly:
-    * For Docker:
+    - For Docker:
 ```nextflow
 process {
@@ -206,7 +207,7 @@ The [Nextflow DSL2](https://www.nextflow.io/docs/latest/dsl2.html) implementatio
 }
 ```
-    * For Singularity:
+    - For Singularity:
 ```nextflow
 process {
@@ -216,7 +217,7 @@ The [Nextflow DSL2](https://www.nextflow.io/docs/latest/dsl2.html) implementatio
 }
 ```
-    * For Conda:
+    - For Conda:
 ```nextflow
 process {

View file

@@ -7,13 +7,13 @@
         "git_sha": "e745e167c1020928ef20ea1397b6b4d230681b4d"
     },
     "custom/dumpsoftwareversions": {
-        "git_sha": "20d8250d9f39ddb05dfb437603aaf99b5c0b2b41"
+        "git_sha": "e745e167c1020928ef20ea1397b6b4d230681b4d"
     },
     "fastp": {
         "git_sha": "d0a1cbb703a130c19f6796c3fce24fbe7dfce789"
     },
     "fastqc": {
-        "git_sha": "9d0cad583b9a71a6509b754fdf589cbfbed08961"
+        "git_sha": "e745e167c1020928ef20ea1397b6b4d230681b4d"
    },
     "kraken2/kraken2": {
         "git_sha": "e745e167c1020928ef20ea1397b6b4d230681b4d"
@@ -22,10 +22,11 @@
         "git_sha": "72b96f4e504eef673f2b5c13560a9d90b669129b"
     },
     "multiqc": {
-        "git_sha": "20d8250d9f39ddb05dfb437603aaf99b5c0b2b41"
+        "git_sha": "e745e167c1020928ef20ea1397b6b4d230681b4d"
     },
     "untar": {
         "git_sha": "e080f4c8acf5760039ed12ec1f206170f3f9a918"
+    },
     "porechop": {
         "git_sha": "e20e57f90b6787ac9a010a980cf6ea98bd990046"
     }

View file

@@ -15,6 +15,9 @@ process CUSTOM_DUMPSOFTWAREVERSIONS {
     path "software_versions_mqc.yml", emit: mqc_yml
     path "versions.yml"             , emit: versions
+    when:
+    task.ext.when == null || task.ext.when
     script:
     def args = task.ext.args ?: ''
     template 'dumpsoftwareversions.py'
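The `when:` guard added here (and to the other modules below) is the nf-core DSL2 convention for making a module skippable from configuration alone: unless a config supplies `ext.when`, the expression is `null` and the process runs. A minimal sketch of how a pipeline config could use it — the selector name is real, the parameter name is illustrative:

```nextflow
// conf/modules.config (sketch)
process {
    withName: 'CUSTOM_DUMPSOFTWAREVERSIONS' {
        // evaluated as task.ext.when; the process is skipped when it is false
        ext.when = { !params.skip_versions }
    }
}
```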

View file

@@ -8,7 +8,7 @@ tools:
       description: Custom module used to dump software versions within the nf-core pipeline template
       homepage: https://github.com/nf-core/tools
       documentation: https://github.com/nf-core/tools
-      licence: ['MIT']
+      licence: ["MIT"]
 input:
   - versions:
       type: file

View file

@@ -15,6 +15,9 @@ process FASTQC {
     tuple val(meta), path("*.zip") , emit: zip
     path "versions.yml"            , emit: versions
+    when:
+    task.ext.when == null || task.ext.when
     script:
     def args = task.ext.args ?: ''
     // Add soft-links to original FastQs for consistent naming in pipeline

View file

@@ -15,7 +15,7 @@ tools:
           overrepresented sequences.
       homepage: https://www.bioinformatics.babraham.ac.uk/projects/fastqc/
       documentation: https://www.bioinformatics.babraham.ac.uk/projects/fastqc/Help/
-      licence: ['GPL-2.0-only']
+      licence: ["GPL-2.0-only"]
 input:
   - meta:
       type: map

View file

@@ -1,10 +1,10 @@
 process MULTIQC {
     label 'process_medium'
-    conda (params.enable_conda ? 'bioconda::multiqc=1.11' : null)
+    conda (params.enable_conda ? 'bioconda::multiqc=1.12' : null)
     container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-        'https://depot.galaxyproject.org/singularity/multiqc:1.11--pyhdfd78af_0' :
-        'quay.io/biocontainers/multiqc:1.11--pyhdfd78af_0' }"
+        'https://depot.galaxyproject.org/singularity/multiqc:1.12--pyhdfd78af_0' :
+        'quay.io/biocontainers/multiqc:1.12--pyhdfd78af_0' }"
     input:
     path multiqc_files
@@ -15,6 +15,9 @@ process MULTIQC {
     path "*_plots"      , optional:true, emit: plots
     path "versions.yml" , emit: versions
+    when:
+    task.ext.when == null || task.ext.when
     script:
     def args = task.ext.args ?: ''
     """

View file

@@ -11,7 +11,7 @@ tools:
       It's a general use tool, perfect for summarising the output from numerous bioinformatics tools.
       homepage: https://multiqc.info/
       documentation: https://multiqc.info/docs/
-      licence: ['GPL-3.0-or-later']
+      licence: ["GPL-3.0-or-later"]
 input:
   - multiqc_files:
       type: file

View file

@@ -68,9 +68,9 @@ params {
     // centrifuge
     run_centrifuge            = false
-    save_unaligned            = false
-    save_aligned              = false
-    sam_format                = false
+    centrifuge_save_unaligned = false
+    centrifuge_save_aligned   = false
+    centrifuge_sam_format     = false
 }
 // Load base.config by default for all pipelines
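After this rename the Centrifuge options are namespaced on the command line, which avoids clashes with similarly generic parameter names from other tools, e.g. (input file names illustrative):

```console
$ nextflow run nf-core/taxprofiler -profile docker \
    --input samplesheet.csv --databases database.csv \
    --run_centrifuge --centrifuge_save_unaligned --centrifuge_sam_format
```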

View file

@@ -266,5 +266,47 @@
     {
         "$ref": "#/definitions/generic_options"
     }
-    ]
+    ],
+    "properties": {
+        "databases": {
+            "type": "string",
+            "default": "None"
+        },
+        "shortread_clipmerge": {
+            "type": "boolean"
+        },
+        "shortread_excludeunmerged": {
+            "type": "boolean",
+            "default": true
+        },
+        "longread_clip": {
+            "type": "boolean"
+        },
+        "run_malt": {
+            "type": "boolean"
+        },
+        "malt_mode": {
+            "type": "string",
+            "default": "BlastN"
+        },
+        "run_kraken2": {
+            "type": "boolean"
+        },
+        "run_centrifuge": {
+            "type": "string",
+            "default": "false"
+        },
+        "centrifuge_save_unaligned": {
+            "type": "string",
+            "default": "false"
+        },
+        "centrifuge_save_aligned": {
+            "type": "string",
+            "default": "false"
+        },
+        "centrifuge_sam_format": {
+            "type": "string",
+            "default": "false"
+        }
+    }
 }

View file

@@ -21,7 +21,7 @@ workflow DB_CHECK {
     ch_dbs_for_untar = parsed_samplesheet
         .branch {
-            untar: it[1].toString().endsWith(".tar.gz") && it[0]['tool']!="centrifuge"
+            untar: it[1].toString().endsWith(".tar.gz") && it[0]['tool'] != "centrifuge"
             skip: true
         }
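`.branch` routes each item into the first criterion that evaluates to true, so `skip: true` acts as the catch-all for everything not sent to `untar` — here including Centrifuge databases, which are excluded by tool name. A standalone sketch of the same pattern (file names illustrative):

```nextflow
Channel
    .of(
        [ [tool: 'kraken2'],    file('k2_db.tar.gz') ],  // matches first criterion -> untar
        [ [tool: 'centrifuge'], file('cf_db.tar.gz') ],  // excluded by tool name   -> skip
        [ [tool: 'malt'],       file('malt_db') ]        // not a .tar.gz           -> skip
    )
    .branch {
        untar: it[1].toString().endsWith(".tar.gz") && it[0]['tool'] != "centrifuge"
        skip: true
    }
    .set { ch_dbs_for_untar }
```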

View file

@@ -24,7 +24,7 @@ if (params.databases) { ch_databases = file(params.databases) } else { exit 1, '
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 */
-ch_multiqc_config        = file("$projectDir/assets/multiqc_config.yaml", checkIfExists: true)
+ch_multiqc_config        = file("$projectDir/assets/multiqc_config.yml", checkIfExists: true)
 ch_multiqc_custom_config = params.multiqc_config ? Channel.fromPath(params.multiqc_config) : Channel.empty()
 /*
@@ -88,8 +88,9 @@ workflow TAXPROFILER {
     //
     // MODULE: Run FastQC
     //
+    ch_input_for_fastqc = INPUT_CHECK.out.fastq.mix( INPUT_CHECK.out.nanopore ).dump(tag: "input_to_fastq")
     FASTQC (
-        INPUT_CHECK.out.fastq.mix( INPUT_CHECK.out.nanopore )
+        ch_input_for_fastqc
     )
     ch_versions = ch_versions.mix(FASTQC.out.versions.first())
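The `.dump(tag: ...)` call is a pass-through: the channel is unchanged, and its contents are only printed when the pipeline is launched with Nextflow's `-dump-channels` option, which makes it a cheap debugging hook:

```console
$ nextflow run nf-core/taxprofiler -profile test,docker -dump-channels input_to_fastq
```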
@@ -206,7 +207,7 @@ workflow TAXPROFILER {
     }
     if ( params.run_centrifuge ) {
-        CENTRIFUGE ( ch_input_for_centrifuge.reads, ch_input_for_centrifuge.db, params.save_unaligned, params.save_aligned, params.sam_format )
+        CENTRIFUGE ( ch_input_for_centrifuge.reads, ch_input_for_centrifuge.db, params.centrifuge_save_unaligned, params.centrifuge_save_aligned, params.centrifuge_sam_format )
     }
     //