Mirror of https://github.com/MillironX/nf-configs.git, synced 2024-11-10 20:13:09 +00:00

Commit 99afe86ea9 (parent f41d325873): update docs
2 changed files with 23 additions and 19 deletions
`uppmax.config`:

```diff
@@ -27,10 +27,6 @@ params {
 
 def hostname = "hostname".execute().text.trim()
 
-if (hostname ==~ "r.*") {
-    params.max_cpus = 20
-}
-
 if (hostname ==~ "b.*") {
     params.max_memory = 109.GB
 }
@@ -39,6 +35,10 @@ if (hostname ==~ "i.*") {
     params.max_memory = 250.GB
 }
 
+if (hostname ==~ "r.*") {
+    params.max_cpus = 20
+}
+
 profiles {
     devel {
         params {
```
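The config above selects resource limits by matching the node's hostname with Groovy's `==~` operator (a full-string regex match). A minimal shell sketch of the same dispatch logic, using hypothetical hostnames that are not taken from the commit:

```shell
#!/bin/sh
# Hypothetical hostnames; the real config matches on the leading letter,
# e.g. "r.*" full-matches rackham nodes, "b.*" bianca, "i.*" irma.
for host in rackham1 bianca2 irma3; do
    case "$host" in
        r*) echo "$host -> max_cpus=20" ;;
        b*) echo "$host -> max_memory=109.GB" ;;
        i*) echo "$host -> max_memory=250.GB" ;;
    esac
done
```

In the Nextflow config the same effect is achieved with the `if (hostname ==~ "r.*") { ... }` blocks shown in the diff.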
UPPMAX docs:

```diff
@@ -4,7 +4,9 @@ All nf-core pipelines have been successfully configured for use on the Swedish U
 
 ## Using the UPPMAX config profile
 
-To use, run the pipeline with `-profile uppmax` (one hyphen). This will download and launch the [`uppmax.config`](../conf/uppmax.config) which has been pre-configured with a setup suitable for the UPPMAX servers. Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline.
+To use, run the pipeline with `-profile uppmax` (one hyphen).
+This will download and launch the [`uppmax.config`](../conf/uppmax.config) which has been pre-configured with a setup suitable for the UPPMAX servers.
+Using this profile, a docker image containing all of the required software will be downloaded, and converted to a Singularity image before execution of the pipeline.
 
 In addition to this config profile, you will also need to specify an UPPMAX project id.
 You can do this with the `--project` flag (two hyphens) when launching nextflow. For example:
```
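The "For example" above continues in unchanged lines that fall outside this hunk. Purely for illustration, an invocation might look like the following; the pipeline name and project id are placeholders, not values from the commit:

```bash
nextflow run nf-core/<pipeline> -profile uppmax --project <your-uppmax-project-id>
```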
````diff
@@ -23,18 +25,6 @@ Just run Nextflow on a login node and it will handle everything else.
 A local copy of the iGenomes resource has been made available on all UPPMAX clusters so you should be able to run the pipeline against any reference available in the `igenomes.config`.
 You can do this by simply using the `--genome <GENOME_ID>` parameter.
 
-## Running offline with Bianca
-
-If running on Bianca, you will have no internet connection and these configs will not be loaded.
-Please use the nf-core helper tool on a different system to download the required pipeline files, and transfer them to bianca.
-This helper tool bundles the config files in this repo together with the pipeline files, so the profile will still be available.
-
-Note that Bianca only allocates 7 GB memory per core so the max memory needs to be limited:
-
-```bash
---max_memory "112GB"
-```
-
 ## Getting more memory
 
 If your nf-core pipeline run is running out of memory, you can run on a fat node with more memory using the following nextflow flags:
````
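As a sanity check on the Bianca note removed in the hunk above: with 16 cores per Bianca node (per the cluster list elsewhere in this commit) and 7 GB of memory per core, the cap works out to 112 GB, matching the `--max_memory "112GB"` flag:

```shell
#!/bin/sh
# Bianca allocates 7 GB of memory per core; a full 16-core node therefore
# caps out at 16 * 7 = 112 GB, hence the --max_memory "112GB" flag.
cores=16
gb_per_core=7
echo "--max_memory \"$((cores * gb_per_core))GB\""   # → --max_memory "112GB"
```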
```diff
@@ -50,9 +40,23 @@ Note that each job will still start with the same request as normal, but restart
 
 All jobs will be submitted to fat nodes using this method, so it's only for use in extreme circumstances.
 
-## devel config
-
-If doing pipeline development work on Uppmax, this profile allows for faster testing.
-
-
+## How to specify a UPPMAX cluster
+
+You actually do not need to: based on the `hostname`, configuration will be automatically applied for the cluster you are on, following these specifications:
+
+* `bianca`
+  * cpus available: 16 cpus
+  * memory available: 109 GB
+* `irma`
+  * cpus available: 16 cpus
+  * memory available: 250 GB
+* `rackham`
+  * cpus available: 20 cpus
+  * memory available: 125 GB
+
+## Development config
+
+If doing pipeline development work on UPPMAX, the `devel` profile allows for faster testing.
+
+Applied after the main UPPMAX config, it overwrites certain parts of the config and submits jobs to the `devcore` queue, which has much faster queue times.
 
```
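The new "Development config" section describes behaviour that, in Nextflow config terms, might look roughly like the sketch below. Only the `devcore` queue name comes from the text; the `max_time` value and the exact structure are assumptions, not the actual contents of `uppmax.config`:

```groovy
// Hypothetical sketch of a devel profile layered over the main UPPMAX config.
// 'devcore' is named in the docs; the max_time value is an assumption.
profiles {
    devel {
        params {
            max_time = 1.h      // assumed cap; devel queues favour short jobs
        }
        process {
            queue = 'devcore'   // much faster queue times for testing
        }
    }
}
```

Because the profile is applied after the main config, settings here overwrite the hostname-derived defaults shown earlier.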