
Merge branch 'master' into master

James A. Fellows Yates 2020-07-07 14:34:10 +02:00 committed by GitHub
commit fcdbeaa446
39 changed files with 430 additions and 76 deletions

@@ -16,7 +16,7 @@ jobs:
needs: test_all_profiles
strategy:
matrix:
- profile: ['awsbatch', 'bigpurple', 'binac', 'cbe', 'ccga_dx', 'ccga_med', 'cfc', 'crick', 'denbi_qbic', 'ebc', 'genotoul', 'genouest', 'gis', 'google', 'hebbe', 'kraken', 'munin', 'pasteur', 'phoenix', 'prince', 'shh', 'uct_hex', 'uppmax', 'uzh']
+ profile: ['awsbatch', 'bi','bigpurple', 'binac', 'cbe', 'ccga_dx', 'ccga_med', 'cfc', 'cfc_dev', 'crick', 'denbi_qbic', 'ebc', 'genotoul', 'genouest', 'gis', 'google', 'hebbe', 'icr_davros', 'kraken', 'munin', 'pasteur', 'phoenix', 'prince', 'shh', 'uct_hpc', 'uppmax', 'utd_ganymede', 'uzh']
steps:
- uses: actions/checkout@v1
- name: Install Nextflow
@@ -26,4 +26,5 @@ jobs:
- name: Check ${{ matrix.profile }} profile
env:
SCRATCH: '~'
+ NXF_GLOBAL_CONFIG: awsbatch.config
run: nextflow run ${GITHUB_WORKSPACE}/configtest.nf --custom_config_base=${GITHUB_WORKSPACE} -profile ${{ matrix.profile }}

@@ -15,6 +15,7 @@ A repository for hosting Nextflow configuration files containing custom paramete
* [Documentation](#documentation)
* [Uploading to `nf-core/configs`](#uploading-to-nf-coreconfigs)
* [Adding a new pipeline-specific config](#adding-a-new-pipeline-specific-config)
+ * [Pipeline-specific institutional documentation](#pipeline-specific-institutional-documentation)
* [Pipeline-specific documentation](#pipeline-specific-documentation)
* [Enabling pipeline-specific configs within a pipeline](#enabling-pipeline-specific-configs-within-a-pipeline)
* [Create the pipeline-specific `nf-core/configs` files](#create-the-pipeline-specific-nf-coreconfigs-files)
@@ -95,6 +96,7 @@ Currently documentation is available for the following systems:
* [AWSBATCH](docs/awsbatch.md)
* [BIGPURPLE](docs/bigpurple.md)
+ * [BI](docs/bi.md)
* [BINAC](docs/binac.md)
* [CBE](docs/cbe.md)
* [CCGA_DX](docs/ccga_dx.md)
@@ -102,21 +104,23 @@ Currently documentation is available for the following systems:
* [CFC](docs/cfc.md)
* [CRICK](docs/crick.md)
* [CZBIOHUB_AWS](docs/czbiohub.md)
- * [CZBIOHUB_AWS_HIGHPRIORITY](docs/czbiohub.md)
* [DENBI_QBIC](docs/denbi_qbic.md)
+ * [EBC](docs/ebc.md)
* [GENOTOUL](docs/genotoul.md)
* [GENOUEST](docs/genouest.md)
* [GIS](docs/gis.md)
* [GOOGLE](docs/google.md)
* [HEBBE](docs/hebbe.md)
+ * [ICR_DAVROS](docs/icr_davros.md)
* [KRAKEN](docs/kraken.md)
* [MUNIN](docs/munin.md)
* [PASTEUR](docs/pasteur.md)
* [PHOENIX](docs/phoenix.md)
* [PRINCE](docs/prince.md)
* [SHH](docs/shh.md)
- * [UCT_HEX](docs/uct_hex.md)
+ * [UCT_HPC](docs/uct_hpc.md)
* [UPPMAX](docs/uppmax.md)
+ * [UTD_GANYMEDE](docs/utd_ganymede.md)
* [UZH](docs/uzh.md)
### Uploading to `nf-core/configs`
@@ -157,18 +161,28 @@ Each configuration file will add new params and overwrite the params already exi
Note that pipeline-specific configs are not required and should only be added if needed.
- ### Pipeline-specific documentation
+ ### Pipeline-specific institutional documentation
- Currently documentation is available for the following pipeline within the specific profile:
+ Currently documentation is available for the following pipelines within specific profiles:
* ampliseq
  * [BINAC](docs/pipeline/ampliseq/binac.md)
+   * [UPPMAX](docs/pipeline/ampliseq/uppmax.md)
* eager
  * [SHH](docs/pipeline/eager/shh.md)
+ * rnafusion
+   * [MUNIN](docs/pipeline/rnafusion/munin.md)
* sarek
  * [MUNIN](docs/pipeline/sarek/munin.md)
  * [UPPMAX](docs/pipeline/sarek/uppmax.md)
+ ### Pipeline-specific documentation
+ Currently documentation is available for the following pipeline:
+ * viralrecon
+   * [genomes](docs/pipeline/viralrecon/genomes.md)
### Enabling pipeline-specific configs within a pipeline
:warning: **This has to be done on a fork of the `nf-core/<PIPELINE>` repository.**

@@ -49,7 +49,7 @@ def check_config(Config, Github):
    ### Check Github Config now
    tests = set()
    ### Ignore these profiles
-   ignore_me = ['czbiohub_aws_highpriority', 'czbiohub_aws']
+   ignore_me = ['czbiohub_aws']
    tests.update(ignore_me)
    with open(Github, 'r') as ghfile:
        for line in ghfile:

conf/bi.config (new file)

@@ -0,0 +1,20 @@
params {
  config_profile_description = 'Boehringer Ingelheim internal profile provided by nf-core/configs.'
  config_profile_contact = 'Alexander Peltzer (@apeltzer)'
  config_profile_url = 'https://www.boehringer-ingelheim.com/'
}

params.globalConfig = set_global_config()

// Include the site-wide global config pointed to by the NXF_GLOBAL_CONFIG environment variable, if it is set.
def set_global_config() {
  def config = System.getenv('NXF_GLOBAL_CONFIG')
  if (config == null) {
    def errorMessage = "WARNING: bi.config requires the NXF_GLOBAL_CONFIG env var to be set. Point it to the global.config file if you want to use this profile."
    System.err.println(errorMessage)
  } else {
    includeConfig config
  }
  return config
}

@@ -7,7 +7,6 @@ params {
process {
  executor = 'slurm'
- module = 'singularity/3.4.1'
  queue = { task.memory <= 170.GB ? 'c' : 'm' }
  clusterOptions = { task.time <= 8.h ? '--qos short': task.time <= 48.h ? '--qos medium' : '--qos long' }
}

@@ -1,7 +1,7 @@
//Profile config names for nf-core/configs
params {
  config_profile_description = 'QBiC Core Facility cluster profile provided by nf-core/configs.'
- config_profile_contact = 'Alexander Peltzer (@apeltzer)'
+ config_profile_contact = 'Gisela Gabernet (@ggabernet)'
  config_profile_url = 'http://qbic.uni-tuebingen.de/'
}
@@ -14,6 +14,7 @@ process {
  beforeScript = 'module load devel/singularity/3.4.2'
  executor = 'slurm'
  queue = { task.memory > 60.GB || task.cpus > 20 ? 'qbic' : 'compute' }
+ scratch = 'true'
}
weblog{

conf/cfc_dev.config (new file)

@@ -0,0 +1,29 @@
//Profile config names for nf-core/configs
params {
  config_profile_description = 'QBiC Core Facility cluster dev profile without container cache provided by nf-core/configs.'
  config_profile_contact = 'Gisela Gabernet (@ggabernet)'
  config_profile_url = 'http://qbic.uni-tuebingen.de/'
}

singularity {
  enabled = true
}

process {
  beforeScript = 'module load devel/singularity/3.4.2'
  executor = 'slurm'
  queue = { task.memory > 60.GB || task.cpus > 20 ? 'qbic' : 'compute' }
  scratch = 'true'
}

weblog {
  enabled = true
  url = 'https://services.qbic.uni-tuebingen.de/flowstore/workflows'
}

params {
  igenomes_base = '/nfsmounts/igenomes'
  max_memory = 1999.GB
  max_cpus = 128
  max_time = 140.h
}

@@ -50,6 +50,7 @@ params {
  // No final slash because it's added later
  gencode_base = "s3://czbiohub-reference/gencode"
  transgenes_base = "s3://czbiohub-reference/transgenes"
+ refseq_base = "s3://czbiohub-reference/ncbi/genomes/refseq/"
  // AWS configurations
  awsregion = "us-west-2"
@@ -79,6 +80,12 @@ params {
  transcript_fasta = "${params.gencode_base}/mouse/vM21/gencode.vM21.transcripts.ERCC92.fa"
  star = "${params.gencode_base}/mouse/vM21/STARIndex/"
}
+ 'AaegL5.0' {
+   fasta = "${params.refseq_base}/invertebrate/Aedes_aegypti/GCF_002204515.2_AaegL5.0/nf-core--rnaseq/reference_genome/GCF_002204515.2_AaegL5.0_genomic.fna"
+   gtf = "${params.refseq_base}/invertebrate/Aedes_aegypti/GCF_002204515.2_AaegL5.0/nf-core--rnaseq/reference_genome/GCF_002204515.2_AaegL5.0_genomic.gtf"
+   bed = "${params.refseq_base}/invertebrate/Aedes_aegypti/GCF_002204515.2_AaegL5.0/nf-core--rnaseq/reference_genome/GCF_002204515.2_AaegL5.0_genomic.bed"
+   star = "${params.refseq_base}/invertebrate/Aedes_aegypti/GCF_002204515.2_AaegL5.0/nf-core--rnaseq/reference_genome/star/"
+ }
}
transgenes {
@@ -128,3 +135,12 @@ params {
    }
  }
}
+ profiles {
+   highpriority {
+     process {
+       queue = 'highpriority-971039e0-830c-11e9-9e0b-02c5b84a8036'
+     }
+   }
+ }


@@ -1,12 +0,0 @@
/*
* -------------------------------------------------
* Nextflow config file for Chan Zuckerberg Biohub
* -------------------------------------------------
* Defines reference genomes, using iGenome paths
* Imported under the default 'standard' Nextflow
* profile in nextflow.config
*/
process {
queue = 'highpriority-971039e0-830c-11e9-9e0b-02c5b84a8036'
}

conf/icr_davros.config (new file)

@@ -0,0 +1,39 @@
/*
 * -------------------------------------------------
 *  Nextflow nf-core config file for ICR davros HPC
 * -------------------------------------------------
 * Defines LSF process executor and singularity
 * settings.
 *
 */
params {
  config_profile_description = "Nextflow nf-core profile for ICR davros HPC"
  config_profile_contact = "Adrian Larkeryd (@adrlar)"
}

singularity {
  enabled = true
  runOptions = "--bind /mnt:/mnt --bind /data:/data"
  // autoMounts = true // autoMounts sometimes causes a rare bug with the installed version of singularity
}

executor {
  // This is set because of an issue with too many
  // singularity containers launching at once; they
  // cause a singularity error with exit code 255.
  submitRateLimit = "2 sec"
}

process {
  executor = "LSF"
}

params {
  // LSF cluster set up with memory tied to cores,
  // it can't be requested. Locked at 12G per core.
  cpus = 10
  max_cpus = 20
  max_memory = 12.GB
  max_time = 168.h
  igenomes_base = "/mnt/scratch/readonly/igenomes"
}

@@ -0,0 +1,15 @@
// Profile config names for nf-core/configs
params {
  // Specific nf-core/configs params
  config_profile_contact = 'Daniel Lundin (daniel.lundin@lnu.se)'
  config_profile_description = 'nf-core/ampliseq UPPMAX profile provided by nf-core/configs'
}

process {
  withName: make_SILVA_132_16S_classifier {
    clusterOptions = { "-A $params.project -C fat -p node -N 1 ${params.clusterOptions ?: ''}" }
  }
  withName: classifier {
    clusterOptions = { "-A $params.project -C fat -p node -N 1 ${params.clusterOptions ?: ''}" }
  }
}

@@ -0,0 +1,10 @@
// rnafusion/munin specific profile config
params {
  max_cpus = 24
  max_memory = 256.GB
  max_time = 72.h

  // Paths
  genomes_base = '/data1/references/rnafusion/dev/'
}

@@ -0,0 +1,13 @@
/*
 * -------------------------------------------------
 *  Nextflow nf-core config file for ICR davros HPC
 * -------------------------------------------------
 */
process {
  errorStrategy = { task.exitStatus in [104,134,137,139,141,143,255] ? 'retry' : 'finish' }
  maxRetries = 5
  withName:MapReads {
    memory = { check_resource(12.GB) }
    time = { check_resource(48.h * task.attempt) }
  }
}

@@ -1,4 +1,4 @@
- // Profile config names for nf-core/configs
+ // sarek/munin specific profile config
params {
  // Specific nf-core/configs params
@@ -7,16 +7,22 @@ params {
  // Specific nf-core/sarek params
  annotation_cache = true
+ cadd_cache = true
+ cadd_indels = '/data1/cache/CADD/v1.4/InDels.tsv.gz'
+ cadd_indels_tbi = '/data1/cache/CADD/v1.4/InDels.tsv.gz.tbi'
+ cadd_wg_snvs = '/data1/cache/CADD/v1.4/whole_genome_SNVs.tsv.gz'
+ cadd_wg_snvs_tbi = '/data1/cache/CADD/v1.4/whole_genome_SNVs.tsv.gz.tbi'
  pon = '/data1/PON/vcfs/BTB.PON.vcf.gz'
  pon_index = '/data1/PON/vcfs/BTB.PON.vcf.gz.tbi'
- snpEff_cache = '/data1/cache/snpEff/'
+ snpeff_cache = '/data1/cache/snpEff/'
  vep_cache = '/data1/cache/VEP/'
+ vep_cache_version = '95'
}
// Specific nf-core/sarek process configuration
process {
  withLabel:sentieon {
-   module = {params.sentieon ? 'sentieon/201808.05' : null}
+   module = {params.sentieon ? 'sentieon/201911.00' : null}
    container = {params.sentieon ? null : container}
  }
}

@@ -9,9 +9,13 @@ params {
  igenomeIgnore = true
  genomes_base = params.genome == 'GRCh37' ? '/sw/data/uppnex/ToolBox/ReferenceAssemblies/hg38make/bundle/2.8/b37' : '/sw/data/uppnex/ToolBox/hg38bundle'
}
+ def hostname = "hostname".execute().text.trim()
if (hostname ==~ "r.*") {
  params.singleCPUmem = 6400.MB
}
if (hostname ==~ "i.*") {
  params.singleCPUmem = 15.GB
}

@@ -0,0 +1,20 @@
/*
 * -------------------------------------------------
 *  nfcore/viralrecon custom profile Nextflow config file
 * -------------------------------------------------
 * Defines viral reference genomes for all environments.
 */

params {
  // Genome reference file paths
  genomes {
    'NC_045512.2' {
      fasta = "https://raw.githubusercontent.com/nf-core/test-datasets/viralrecon/genome/NC_045512.2/GCF_009858895.2_ASM985889v3_genomic.200409.fna.gz"
      gff   = "https://raw.githubusercontent.com/nf-core/test-datasets/viralrecon/genome/NC_045512.2/GCF_009858895.2_ASM985889v3_genomic.200409.gff.gz"
    }
    'MN908947.3' {
      fasta = "https://raw.githubusercontent.com/nf-core/test-datasets/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.fna.gz"
      gff   = "https://raw.githubusercontent.com/nf-core/test-datasets/viralrecon/genome/MN908947.3/GCA_009858895.3_ASM985889v3_genomic.200409.gff.gz"
    }
  }
}


@@ -1,23 +0,0 @@
//Profile config names for nf-core/configs
params {
config_profile_description = 'University of Cape Town HEX cluster config file provided by nf-core/configs.'
config_profile_contact = 'Katie Lennard (@kviljoen)'
config_profile_url = 'http://hpc.uct.ac.za/index.php/hex-3/'
}
singularity {
enabled = true
cacheDir = "/scratch/DB/bio/singularity-containers"
}
process {
stageInMode = 'symlink'
stageOutMode = 'rsync'
queue = 'UCTlong'
clusterOptions = { "-M $params.email -m abe -l nodes=1:ppn=1:series600" }
}
executor{
executor = 'pbs'
jobName = { "$task.tag" }
}

conf/uct_hpc.config (new file)

@@ -0,0 +1,41 @@
/*
 * -------------------------------------------------
 *  HPC cluster config file
 * -------------------------------------------------
 * http://www.hpc.uct.ac.za/
 */

params {
  config_profile_description = 'University of Cape Town High Performance Cluster config file provided by nf-core/configs.'
  config_profile_contact = 'Katie Lennard (@kviljoen)'
  config_profile_url = 'http://hpc.uct.ac.za/index.php/hpc-cluster/'

  singularity_cache_dir = "/bb/DB/bio/singularity-containers/"
  igenomes_base = '/bb/DB/bio/rna-seq/references'
  max_memory = 384.GB
  max_cpus = 40
  max_time = 1000.h
  hpc_queue = 'ada'
  hpc_account = '--account cbio'
  genome = 'GRCh37'
}

singularity {
  enabled = true
  cacheDir = params.singularity_cache_dir
  autoMounts = true
}

process {
  executor = 'slurm'
  queue = params.hpc_queue
  // Increasing maxRetries, this will overwrite what we have in base.config
  maxRetries = 4
  clusterOptions = params.hpc_account
  stageInMode = 'symlink'
  stageOutMode = 'rsync'
}

executor {
  queueSize = 15
}

@@ -26,7 +26,7 @@ params {
def hostname = "hostname".execute().text.trim()
- if (hostname ==~ "b.*") {
+ if (hostname ==~ "b.*" || hostname ==~ "s.*") {
  params.max_memory = 109.GB
}

conf/utd_ganymede.config (new file)

@@ -0,0 +1,24 @@
//Profile config names for nf-core/configs
params {
  config_profile_description = 'University of Texas at Dallas HPC cluster profile provided by nf-core/configs'
  config_profile_contact = 'Edmund Miller (@emiller88)'
  config_profile_url = 'http://docs.oithpc.utdallas.edu/'
}

singularity {
  enabled = true
  envWhitelist = 'SINGULARITY_BINDPATH'
  autoMounts = true
}

process {
  beforeScript = 'module load singularity/2.4.5'
  executor = 'slurm'
  queue = 'genomics'
}

params {
  max_memory = 32.GB
  max_cpus = 16
  max_time = 48.h
}

docs/bi.md (new file)

@@ -0,0 +1,9 @@
# nf-core/configs: BI Configuration

All nf-core pipelines have been successfully configured for use at Boehringer Ingelheim.

To use, run the pipeline with `-profile bi`. This will download and launch the [`bi.config`](../conf/bi.config) which has been pre-configured with a setup suitable for the BI systems. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline.

Before running the pipeline you will need to follow the internal documentation to run Nextflow on our systems. In addition, you need to set the environment variable `NXF_GLOBAL_CONFIG` to the path of the internal global config, which is not publicly available here.

>NB: Nextflow will need to submit the jobs via the job scheduler to the HPC cluster and as such the commands above will have to be executed on one of the login nodes. If in doubt contact IT.
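For illustration only, a launch on the BI systems might look like the sketch below; the pipeline name and the path to the internal global config are placeholders, not part of this commit.

```bash
## Hypothetical example: point Nextflow at the internal global config, then launch with the BI profile
export NXF_GLOBAL_CONFIG=/path/to/global.config
nextflow run nf-core/<PIPELINE> -profile bi
```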

@@ -4,13 +4,12 @@ All nf-core pipelines have been successfully configured for use on the CLIP BATC
To use, run the pipeline with `-profile cbe`. This will download and launch the [`cbe.config`](../conf/cbe.config) which has been pre-configured with a setup suitable for the CBE cluster. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline.
- Before running the pipeline you will need to load Nextflow and Singularity using the environment module system on CBE. You can do this by issuing the commands below:
+ Before running the pipeline you will need to load Nextflow using the environment module system on CBE. You can do this by issuing the commands below:
```bash
- ## Load Nextflow and Singularity environment modules
+ ## Load Nextflow environment module
module purge
module load nextflow/19.04.0
- module load singularity/3.2.1
```
A local copy of the [AWS-iGenomes](https://registry.opendata.aws/aws-igenomes/) resource has been made available on CBE so you should be able to run the pipeline against any reference available in the `igenomes.config` specific to the nf-core pipeline. You can do this by simply using the `--genome <GENOME_ID>` parameter.
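As a sketch of a complete launch on CBE (the pipeline name and genome ID are placeholders used purely for illustration):

```bash
## Hypothetical example: load Nextflow, then run against a local AWS-iGenomes reference
module purge
module load nextflow/19.04.0
nextflow run nf-core/<PIPELINE> -profile cbe --genome GRCh37
```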

@@ -122,3 +122,11 @@ For Human and Mouse, we use [GENCODE](https://www.gencodegenes.org/) gene annota
>NB: You will need an account to use the HPC cluster on PROFILE CLUSTER in order to run the pipeline. If in doubt contact IT.
>NB: Nextflow will need to submit the jobs via the job scheduler to the HPC cluster and as such the commands above will have to be executed on one of the login nodes. If in doubt contact IT.
+ ## High Priority Queue
+ If you would like to run with the _High Priority_ queue, specify the `highpriority` config profile after `czbiohub_aws`. When applied after the main `czbiohub_aws` config, it overwrites the process `queue` identifier.
+ To use it, submit your run with `-profile czbiohub_aws,highpriority`.
+ **Note that the order of config profiles here is important.** For example, `-profile highpriority,czbiohub_aws` will not work.
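A minimal sketch of such a submission (the pipeline name is a placeholder); `czbiohub_aws` has to come first so that `highpriority` can overwrite its `queue` setting:

```bash
## Hypothetical example: main czbiohub_aws config first, then the highpriority overlay
nextflow run nf-core/<PIPELINE> -profile czbiohub_aws,highpriority
```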

docs/icr_davros.md (new file)

@@ -0,0 +1,22 @@
# nf-core/configs: Institute of Cancer Research (Davros HPC) Configuration

Deployment and testing of nf-core pipelines at the Davros cluster is on-going.

To run an nf-core pipeline on Davros, run the pipeline with `-profile icr_davros`. This will download and launch the [`icr_davros.config`](../conf/icr_davros.config) which has been pre-configured with a setup suitable for the Davros HPC cluster. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline.

Before running the pipeline you will need to load Nextflow using the environment module system. You can do this by issuing the commands below:

```bash
## Load Nextflow environment modules
module load Nextflow/19.10.0
```

Singularity is installed on the compute nodes of Davros, but not the login nodes. There is no module for Singularity.

A subset of the [AWS-iGenomes](https://github.com/ewels/AWS-iGenomes) resource has been made available locally on Davros so you should be able to run the pipeline against any reference available in the `igenomes.config` specific to the nf-core pipeline you want to execute. You can do this by simply using the `--genome <GENOME_ID>` parameter. Some of the more exotic genomes may not have been downloaded onto Davros so have a look in the `igenomes_base` path specified in [`icr_davros.config`](../conf/icr_davros.config), and if your genome of interest isn't present please contact [Scientific Computing](mailto:schelpdesk@icr.ac.uk).

Alternatively, if you are running the pipeline regularly for genomes that aren't available in the iGenomes resource, we recommend creating a config file with paths to your reference genome indices (see [`reference genomes documentation`](https://nf-co.re/usage/reference_genomes) for instructions).

All of the intermediate files required to run the pipeline will be stored in the `work/` directory. It is recommended to delete this directory after the pipeline has finished successfully because it can get quite large. All of the main output files will be saved in the `results/` directory.

>NB: Nextflow will need to submit the jobs via LSF to the HPC cluster. This can be done from an interactive or normal job. If in doubt contact Scientific Computing.
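For illustration, a run on Davros might look like the sketch below; the pipeline name, genome ID and custom config path are placeholders rather than anything defined by this commit.

```bash
## Hypothetical example: use a locally mirrored iGenomes reference...
module load Nextflow/19.10.0
nextflow run nf-core/<PIPELINE> -profile icr_davros --genome GRCh38

## ...or point the pipeline at your own reference paths with a custom config
nextflow run nf-core/<PIPELINE> -profile icr_davros -c /path/to/my_genomes.config
```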

@@ -6,7 +6,7 @@ Extra specific configuration for the ampliseq pipeline.
To use, run the pipeline with `-profile binac`.
- This will download and launch the ampliseq specific [`binac.config`](../conf/pipeline/ampliseq/binac.config) which has been pre-configured with a setup suitable for the BINAC cluster.
+ This will download and launch the ampliseq specific [`binac.config`](../../../conf/pipeline/ampliseq/binac.config) which has been pre-configured with a setup suitable for the BINAC cluster.
Example: `nextflow run nf-core/ampliseq -profile binac`

@@ -0,0 +1,17 @@
# nf-core/configs: uppmax ampliseq specific configuration

Extra specific configuration for the ampliseq pipeline.

## Usage

To use, run the pipeline with `-profile uppmax`.

This will download and launch the ampliseq specific [`uppmax.config`](../../../conf/pipeline/ampliseq/uppmax.config) which has been pre-configured with a setup suitable for the UPPMAX cluster.

Example: `nextflow run nf-core/ampliseq -profile uppmax`

## ampliseq specific configurations for uppmax

Specific configurations for UPPMAX have been made for ampliseq.

* Makes sure that a fat node is allocated for training and applying a Bayesian classifier.

@@ -6,7 +6,7 @@ Extra specific configuration for eager pipeline
To use, run the pipeline with `-profile shh`.
- This will download and launch the eager specific [`shh.config`](../conf/pipeline/eager/shh.config) which has been pre-configured with a setup suitable for the shh cluster.
+ This will download and launch the eager specific [`shh.config`](../../../conf/pipeline/eager/shh.config) which has been pre-configured with a setup suitable for the shh cluster.
Example: `nextflow run nf-core/eager -profile shh`

@@ -0,0 +1,18 @@
# nf-core/configs: MUNIN rnafusion specific configuration

Extra specific configuration for rnafusion pipeline

## Usage

To use, run the pipeline with `-profile munin`.

This will download and launch the rnafusion specific [`munin.config`](../../../conf/pipeline/rnafusion/munin.config) which has been pre-configured with a setup suitable for the `MUNIN` cluster.

Example: `nextflow run nf-core/rnafusion -profile munin`

## rnafusion specific configurations for MUNIN

Specific configurations for `MUNIN` have been made for rnafusion.

* `cpus`, `memory` and `time` max requirements.
* Paths to specific references and indexes

@@ -6,17 +6,22 @@ Extra specific configuration for sarek pipeline
To use, run the pipeline with `-profile munin`.
- This will download and launch the sarek specific [`munin.config`](../conf/pipeline/sarek/munin.config) which has been pre-configured with a setup suitable for the MUNIN cluster.
+ This will download and launch the sarek specific [`munin.config`](../../../conf/pipeline/sarek/munin.config) which has been pre-configured with a setup suitable for the `MUNIN` cluster.
Example: `nextflow run nf-core/sarek -profile munin`
## Sarek specific configurations for MUNIN
- Specific configurations for MUNIN has been made for sarek.
+ Specific configurations for `MUNIN` has been made for sarek.
- * Params `annotation_cache` set to `true`
+ * Params `annotation_cache` and `cadd_cache` set to `true`
- * Path to `snpEff_cache`: `/data1/cache/snpEff/`
+ * Params `vep_cache_version` set to `95`
+ * Path to `snpeff_cache`: `/data1/cache/snpEff/`
* Path to `vep_cache`: `/data1/cache/VEP/`
* Path to `pon`: `/data1/PON/vcfs/BTB.PON.vcf.gz`
* Path to `pon_index`: `/data1/PON/vcfs/BTB.PON.vcf.gz.tbi`
+ * Path to `cadd_indels`: `/data1/cache/CADD/v1.4/InDels.tsv.gz`
+ * Path to `cadd_indels_tbi`: `/data1/cache/CADD/v1.4/InDels.tsv.gz.tbi`
+ * Path to `cadd_wg_snvs`: `/data1/cache/CADD/v1.4/whole_genome_SNVs.tsv.gz`
+ * Path to `cadd_wg_snvs_tbi`: `/data1/cache/CADD/v1.4/whole_genome_SNVs.tsv.gz.tbi`
* Load module `Sentieon` for Processes with `sentieon` labels

@@ -6,7 +6,7 @@ Extra specific configuration for sarek pipeline
To use, run the pipeline with `-profile uppmax`.
- This will download and launch the sarek specific [`uppmax.config`](../conf/pipeline/sarek/uppmax.config) which has been pre-configured with a setup suitable for uppmax clusters.
+ This will download and launch the sarek specific [`uppmax.config`](../../../conf/pipeline/sarek/uppmax.config) which has been pre-configured with a setup suitable for uppmax clusters.
Example: `nextflow run nf-core/sarek -profile uppmax`

@@ -0,0 +1,9 @@
# nf-core/configs: viralrecon specific configuration

Extra specific configuration for viralrecon pipeline

## Usage

Will be used automatically when running the pipeline with the shared configs in the nf-core/configs repository.

This will download and launch the viralrecon specific [`genomes.config`](../../../conf/pipeline/viralrecon/genomes.config) which has been pre-configured with custom genomes.

@@ -2,7 +2,7 @@
All nf-core pipelines have been successfully configured for use on the Department of Archaeogenetic's SDAG/CDAG clusters at the [Max Planck Institute for the Science of Human History (MPI-SHH)](http://shh.mpg.de).
- To use, run the pipeline with `-profile ssh`. You can further with optimise submissions by specifying which cluster you are using with `-profile shh,sdag` or `-profile ssh,cdag`. This will download and launch the [`shh.config`](../conf/shh.config) which has been pre-configured with a setup suitable for the SDAG and CDAG clusters respectively. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline. The image will currently be centrally stored here:
+ To use, run the pipeline with `-profile shh`. You can further with optimise submissions by specifying which cluster you are using with `-profile shh,sdag` or `-profile shh,cdag`. This will download and launch the [`shh.config`](../conf/shh.config) which has been pre-configured with a setup suitable for the SDAG and CDAG clusters respectively. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline. The image will currently be centrally stored here:
```bash
/projects1/singularity_scratch/cache/
```
@@ -10,7 +10,7 @@ To use, run the pipeline with `-profile ssh`. You can further with optimise subm
however this will likely change to a read-only directory in the future that will be managed by the IT team.
- This configuration will automatically choose the correct SLURM queue (`short`,`medium`,`long`) depending on the time and memory required by each process. `-profile ssh,sdag` additionally allows for submission of jobs to the `supercruncher` queue when a job's requested memory exceeds 756GB.
+ This configuration will automatically choose the correct SLURM queue (`short`,`medium`,`long`) depending on the time and memory required by each process. `-profile shh,sdag` additionally allows for submission of jobs to the `supercruncher` queue when a job's requested memory exceeds 756GB.
>NB: You will need an account and VPN access to use the cluster at MPI-SHH in order to run the pipeline. If in doubt contact the IT team.
>NB: Nextflow will need to submit the jobs via SLURM to the clusters and as such the commands above will have to be executed on one of the head nodes. If in doubt contact IT.

docs/uct_hpc.md (new file)

@@ -0,0 +1,5 @@
# nf-core/configs: UCT HPC config

University of Cape Town [High Performance Cluster](http://hpc.uct.ac.za/index.php/hpc-cluster/) config.

For help or more information, please contact Katie Lennard (@kviljoen).

docs/utd_ganymede.md (new file)

@@ -0,0 +1,18 @@
# nf-core/configs: UTD Ganymede Configuration

All nf-core pipelines have been successfully configured for use on the Ganymede HPC cluster at [The University of Texas at Dallas](https://www.utdallas.edu/).

To use, run the pipeline with `-profile utd_ganymede`. This will download and launch the [`utd_ganymede.config`](../conf/utd_ganymede.config) which has been pre-configured with a setup suitable for the Ganymede HPC cluster. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline.

Before running the pipeline you will need to load Singularity using the environment module system on Ganymede. You can do this by issuing the commands below:

```bash
## Singularity environment modules
module purge
module load singularity
```

All of the intermediate files required to run the pipeline will be stored in the `work/` directory. It is recommended to delete this directory after the pipeline has finished successfully because it can get quite large, and all of the main output files will be saved in the `results/` directory anyway.

>NB: You will need an account to use the HPC cluster on Ganymede in order to run the pipeline. If in doubt contact the Ganymede admins.
>NB: Nextflow will need to submit the jobs via SLURM to the HPC cluster and as such the commands above will have to be executed on one of the login nodes. If in doubt contact the Ganymede admins.
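For illustration, a complete launch on Ganymede might look like the sketch below (the pipeline name is a placeholder):

```bash
## Hypothetical example: load Singularity, then launch with the Ganymede profile
module purge
module load singularity
nextflow run nf-core/<PIPELINE> -profile utd_ganymede
```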

@@ -11,18 +11,18 @@
//Please use a new line per include Config section to allow easier linting/parsing. Thank you.
profiles {
  awsbatch { includeConfig "${params.custom_config_base}/conf/awsbatch.config" }
+ bi { includeConfig "${params.custom_config_base}/conf/bi.config" }
  bigpurple { includeConfig "${params.custom_config_base}/conf/bigpurple.config" }
  binac { includeConfig "${params.custom_config_base}/conf/binac.config" }
  cbe { includeConfig "${params.custom_config_base}/conf/cbe.config" }
  ccga_dx { includeConfig "${params.custom_config_base}/conf/ccga_dx.config" }
  ccga_med { includeConfig "${params.custom_config_base}/conf/ccga_med.config" }
  cfc { includeConfig "${params.custom_config_base}/conf/cfc.config" }
+ cfc_dev { includeConfig "${params.custom_config_base}/conf/cfc_dev.config" }
  crick { includeConfig "${params.custom_config_base}/conf/crick.config" }
  czbiohub_aws { includeConfig "${params.custom_config_base}/conf/czbiohub_aws.config" }
- czbiohub_aws_highpriority {
-   includeConfig "${params.custom_config_base}/conf/czbiohub_aws.config";
-   includeConfig "${params.custom_config_base}/conf/czbiohub_aws_highpriority.config"}
  ebc { includeConfig "${params.custom_config_base}/conf/ebc.config" }
+ icr_davros { includeConfig "${params.custom_config_base}/conf/icr_davros.config" }
  genotoul { includeConfig "${params.custom_config_base}/conf/genotoul.config" }
  google { includeConfig "${params.custom_config_base}/conf/google.config" }
  denbi_qbic { includeConfig "${params.custom_config_base}/conf/denbi_qbic.config" }
@@ -35,8 +35,9 @@ profiles {
  phoenix { includeConfig "${params.custom_config_base}/conf/phoenix.config" }
  prince { includeConfig "${params.custom_config_base}/conf/prince.config" }
  shh { includeConfig "${params.custom_config_base}/conf/shh.config" }
- uct_hex { includeConfig "${params.custom_config_base}/conf/uct_hex.config" }
+ uct_hpc { includeConfig "${params.custom_config_base}/conf/uct_hpc.config" }
  uppmax { includeConfig "${params.custom_config_base}/conf/uppmax.config" }
+ utd_ganymede { includeConfig "${params.custom_config_base}/conf/utd_ganymede.config" }
  uzh { includeConfig "${params.custom_config_base}/conf/uzh.config" }
}
@@ -46,10 +47,14 @@ profiles {
params {
  // This is a groovy map, not a nextflow parameter set
  hostnames = [
+   binac: ['.binac.uni-tuebingen.de'],
    cbe: ['.cbe.vbc.ac.at'],
+   cfc: ['.hpc.uni-tuebingen.de'],
    crick: ['.thecrick.org'],
+   icr_davros: ['.davros.compute.estate'],
    genotoul: ['.genologin1.toulouse.inra.fr', '.genologin2.toulouse.inra.fr'],
    genouest: ['.genouest.org'],
-   uppmax: ['.uppmax.uu.se']
+   uppmax: ['.uppmax.uu.se'],
+   utd_ganymede: ['ganymede.utdallas.edu']
  ]
}

@@ -10,4 +10,5 @@
profiles {
  binac { includeConfig "${params.custom_config_base}/conf/pipeline/ampliseq/binac.config" }
+ uppmax { includeConfig "${params.custom_config_base}/conf/pipeline/ampliseq/uppmax.config" }
}

pipeline/rnafusion.config (new file)

@@ -0,0 +1,13 @@
/*
 * -------------------------------------------------
 *  nfcore/rnafusion custom profile Nextflow config file
 * -------------------------------------------------
 * Config options for custom environments.
 * Cluster-specific config options should be saved
 * in the conf/pipeline/rnafusion folder and imported
 * under a profile name here.
 */

profiles {
  munin { includeConfig "${params.custom_config_base}/conf/pipeline/rnafusion/munin.config" }
}

@@ -11,4 +11,5 @@
profiles {
  munin { includeConfig "${params.custom_config_base}/conf/pipeline/sarek/munin.config" }
  uppmax { includeConfig "${params.custom_config_base}/conf/pipeline/sarek/uppmax.config" }
+ icr_davros { includeConfig "${params.custom_config_base}/conf/pipeline/sarek/icr_davros.config" }
}

@@ -0,0 +1,7 @@
/*
 * -------------------------------------------------
 *  nfcore/viralrecon custom profile Nextflow config file
 * -------------------------------------------------
 */

includeConfig "${params.custom_config_base}/conf/pipeline/viralrecon/genomes.config"