Mirror of https://github.com/MillironX/nf-configs.git

Simplify resource adjustments

Bruno Grande 2022-08-31 09:10:30 -07:00
parent 9c8cb71bc7
commit c883723559
2 changed files with 36 additions and 51 deletions


@@ -37,58 +37,43 @@ executor {
     submitRateLimit = '5 / 1 sec'
 }
 
-// Disabling resource allocation tweaks for now
-//
-// params {
-//     max_memory = 500.GB
-//     max_cpus = 64
-//     max_time = 168.h // One week
-// }
-//
-// process {
-//
-//     cpus = { check_max( 1 * slow(task.attempt), 'cpus' ) }
-//     memory = { check_max( 6.GB * task.attempt, 'memory' ) }
-//     time = { check_max( 24.h * task.attempt, 'time' ) }
-//
-//     // Process-specific resource requirements
-//     withLabel:process_low {
-//         cpus = { check_max( 4 * slow(task.attempt), 'cpus' ) }
-//         memory = { check_max( 12.GB * task.attempt, 'memory' ) }
-//         time = { check_max( 24.h * task.attempt, 'time' ) }
-//     }
-//     withLabel:process_medium {
-//         cpus = { check_max( 12 * slow(task.attempt), 'cpus' ) }
-//         memory = { check_max( 36.GB * task.attempt, 'memory' ) }
-//         time = { check_max( 48.h * task.attempt, 'time' ) }
-//     }
-//     withLabel:process_high {
-//         cpus = { check_max( 24 * slow(task.attempt), 'cpus' ) }
-//         memory = { check_max( 72.GB * task.attempt, 'memory' ) }
-//         time = { check_max( 96.h * task.attempt, 'time' ) }
-//     }
-//     withLabel:process_long {
-//         time = { check_max( 192.h * task.attempt, 'time' ) }
-//     }
-//     withLabel:process_high_memory {
-//         memory = { check_max( 128.GB * task.attempt, 'memory' ) }
-//     }
-//
-//     // Preventing Sarek labels from using the actual maximums
-//     withLabel:memory_max {
-//         memory = { check_max( 128.GB * task.attempt, 'memory' ) }
-//     }
-//     withLabel:cpus_max {
-//         cpus = { check_max( 24 * slow(task.attempt), 'cpus' ) }
-//     }
-//
-// }
+// Adjust default resource allocations (see `../docs/sage.md`)
+process {
+
+    cpus = { check_max( 1 * slow(task.attempt), 'cpus' ) }
+    memory = { check_max( 6.GB * task.attempt, 'memory' ) }
+    time = { check_max( 24.h * task.attempt, 'time' ) }
+
+    // Process-specific resource requirements
+    withLabel:process_low {
+        cpus = { check_max( 4 * slow(task.attempt), 'cpus' ) }
+        memory = { check_max( 12.GB * task.attempt, 'memory' ) }
+        time = { check_max( 24.h * task.attempt, 'time' ) }
+    }
+    withLabel:process_medium {
+        cpus = { check_max( 12 * slow(task.attempt), 'cpus' ) }
+        memory = { check_max( 36.GB * task.attempt, 'memory' ) }
+        time = { check_max( 48.h * task.attempt, 'time' ) }
+    }
+    withLabel:process_high {
+        cpus = { check_max( 24 * slow(task.attempt), 'cpus' ) }
+        memory = { check_max( 72.GB * task.attempt, 'memory' ) }
+        time = { check_max( 96.h * task.attempt, 'time' ) }
+    }
+    withLabel:process_long {
+        time = { check_max( 192.h * task.attempt, 'time' ) }
+    }
+    withLabel:process_high_memory {
+        memory = { check_max( 128.GB * task.attempt, 'memory' ) }
+    }
+
+}
 
 // Function to slow the increase of the resource multiplier
-// as attempts are made. The rationale is that some CPUs
-// don't need to be increased as fast as memory.
+// as attempts are made. The rationale is that the number
+// of CPU cores isn't a limiting factor as often as memory.
 def slow(attempt, factor = 2) {
     return Math.ceil( attempt / factor) as int
 }
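
The effect of `slow()` is easiest to see with concrete numbers: it halves the retry multiplier (rounding up), so CPU requests grow every other attempt while memory and time grow on every attempt. A minimal standalone Groovy sketch, using the `process_medium` base values from the config above (12 CPUs, 36 GB) and omitting the caps that `check_max()` would normally apply:

    // Standalone sketch of how slow() shapes retry escalation.
    def slow(attempt, factor = 2) {
        return Math.ceil( attempt / factor ) as int
    }

    (1..4).each { attempt ->
        def cpus   = 12 * slow(attempt)   // CPU multiplier: 1, 1, 2, 2
        def memory = 36 * attempt         // memory multiplier: 1, 2, 3, 4
        println "attempt ${attempt}: ${cpus} cpus, ${memory} GB memory"
    }
    // attempt 1: 12 cpus, 36 GB memory
    // attempt 2: 12 cpus, 72 GB memory
    // attempt 3: 24 cpus, 108 GB memory
    // attempt 4: 24 cpus, 144 GB memory

In other words, a failing `process_medium` task doubles its memory before it ever asks for more cores, which is exactly the behavior the comment describes.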


@@ -11,8 +11,8 @@ This global configuration includes the following tweaks:
 - Increase the default chunk size for multipart uploads to S3
 - Slow down job submission rate to avoid overwhelming any APIs
 - Define the `check_max()` function, which is missing in Sarek v2
-- (Disabled temporarily) Slow the increase in the number of allocated CPU cores on retries
-- (Disabled temporarily) Increase the default time limits because we run pipelines on AWS
+- Slow the increase in the number of allocated CPU cores on retries
+- Increase the default time limits because we run pipelines on AWS
 
 ## Additional information about iGenomes
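
One note on the `check_max()` bullet above: nf-core pipelines normally generate this helper in their own base config, but Sarek v2 predates it, so this configuration has to supply it. Its definition isn't part of this diff; a condensed sketch of the standard nf-core template, assuming `params.max_cpus`, `params.max_memory`, and `params.max_time` are set, looks roughly like this:

    // Condensed sketch of the standard nf-core check_max() helper
    // (not shown in this diff). It caps a requested resource at the
    // configured params.max_* ceiling instead of failing the run.
    def check_max(obj, type) {
        if (type == 'memory') {
            try {
                if (obj.compareTo(params.max_memory as nextflow.util.MemoryUnit) == 1)
                    return params.max_memory as nextflow.util.MemoryUnit
            } catch (all) {
                println "WARNING: max_memory '${params.max_memory}' is not valid, using ${obj}"
            }
        } else if (type == 'time') {
            try {
                if (obj.compareTo(params.max_time as nextflow.util.Duration) == 1)
                    return params.max_time as nextflow.util.Duration
            } catch (all) {
                println "WARNING: max_time '${params.max_time}' is not valid, using ${obj}"
            }
        } else if (type == 'cpus') {
            try {
                return Math.min(obj as int, params.max_cpus as int)
            } catch (all) {
                println "WARNING: max_cpus '${params.max_cpus}' is not valid, using ${obj}"
            }
        }
        return obj
    }

Every `cpus`/`memory`/`time` closure in the config above funnels through this function, which is why the retry multipliers can keep growing without ever exceeding the configured maximums.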