name: picard_markduplicates
description: Locate and tag duplicate reads in a BAM file
keywords:
  - markduplicates
  - pcr
  - duplicates
  - bam
  - sam
  - cram
tools:
  - picard:
      description: |
        A set of command line tools (in Java) for manipulating high-throughput
        sequencing (HTS) data and formats such as SAM/BAM/CRAM and VCF.
      homepage: https://broadinstitute.github.io/picard/
      documentation: https://broadinstitute.github.io/picard/
params:
  - outdir:
      type: string
      description: |
        The pipeline's output directory. By default, the module will
        output files into `$params.outdir/`
  - publish_dir_mode:
      type: string
      description: |
        Value for the Nextflow `publishDir` mode parameter.
        Available: symlink, rellink, link, copy, copyNoFollow, move.
  - conda:
      type: boolean
      description: |
        Run the module with Conda using the software specified
        via the `conda` directive
input:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
      description: BAM file
      pattern: "*.{bam}"
  - options:
      type: map
      description: |
        Groovy Map containing module options for passing command-line arguments
        and output file paths.
output:
  - meta:
      type: map
      description: |
        Groovy Map containing sample information
        e.g. [ id:'test', single_end:false ]
  - bam:
      type: file
      description: BAM file with duplicate reads marked/removed
      pattern: "*.{bam}"
  - metrics:
      type: file
      description: Duplicate metrics file generated by picard
      pattern: "*.{metrics.txt}"
  - version:
      type: file
      description: File containing software version
      pattern: "*.{version.txt}"
authors:
  - "@drpatelh"
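# A minimal DSL2 usage sketch for this module, kept as YAML comments so the
# metadata file stays valid. The include path, the empty `options` map passed
# via `addParams`, and the single [ meta, bam ] tuple call signature are
# assumptions for illustration only; adapt them to your pipeline's layout and
# to how the module actually consumes the `options` map described above.
#
#   include { PICARD_MARKDUPLICATES } from './modules/nf-core/software/picard/markduplicates/main' addParams( options: [:] )
#
#   workflow {
#       // Channel of [ meta, bam ] tuples, matching the "input" section above
#       ch_bam = Channel.of( [ [ id:'test', single_end:false ], file('test.bam') ] )
#       PICARD_MARKDUPLICATES ( ch_bam )
#   }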