PGTuner v0.x.x (x) (Coming Soon)
- Cleanup development and legacy code
- TODO: Add automated tests to detect errors
- TODO: Refine the documentation with icons marking each item's importance
- TODO: Rewrite the application in JavaScript to support global users (if necessary)

The codebase has been refactored around the decision to host a dual backend in the same repository. Multiple functions have been revised to reduce performance noise, and multiple optimizations have been added to the codebase.

- We have completed the transition between Python and JavaScript, with the dual backend hosted in the same repository.
- Full compatibility with PostgreSQL 18, including its new features and optimizations.
- See `./pgtuner_cli.py` for usage.
- `cpu_to_connection_scale_ratio`: defaults to 5 (range [2.5, 10]) to support varying how the number of connections is scaled in the general optimization phase.
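
  As an illustration only, a hypothetical sketch of how such a ratio could drive connection scaling; the `estimate_connection_budget` helper and the linear formula are assumptions, not PGTuner's actual code. Only the default (5) and the range ([2.5, 10]) come from this entry:

  ```python
  # Hypothetical illustration only: PGTuner's real formula is not shown in
  # this changelog; the default (5) and range ([2.5, 10]) are quoted above.
  def estimate_connection_budget(vcpu: int, scale_ratio: float = 5.0) -> int:
      """Scale the connection budget linearly from the vCPU count."""
      if not 2.5 <= scale_ratio <= 10.0:
          raise ValueError("cpu_to_connection_scale_ratio must be in [2.5, 10]")
      return round(vcpu * scale_ratio)

  print(estimate_connection_budget(8))        # 40 with the default ratio of 5
  print(estimate_connection_budget(8, 2.5))   # 20 at the conservative end
  ```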
- `memory_connection_to_dedicated_os_ratio`: changed from 0.3 to 0.7.
- `hash_mem_usage_level`: changed from -6 to -5. If `hash_mem_multiplier` is 2.0, the working memory usage per connection for one arbitrary operation increases from 1.1196 to 1.1417.
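
  As worked arithmetic on the figures above (the 64 MiB `work_mem` is an assumed example; the multipliers 1.1196 and 1.1417 are quoted from this entry):

  ```python
  # Worked arithmetic: per-connection working memory for one arbitrary
  # operation, using the multipliers quoted above (hash_mem_multiplier = 2.0).
  work_mem_mib = 64                        # example work_mem of 64 MiB (assumed)
  old_usage = work_mem_mib * 1.1196        # hash_mem_usage_level = -6
  new_usage = work_mem_mib * 1.1417        # hash_mem_usage_level = -5
  print(f"{old_usage:.2f} MiB -> {new_usage:.2f} MiB (+{new_usage - old_usage:.2f} MiB)")
  ```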
- `wal_segment_size`: now 2 GiB instead of 128 MiB (scale from 3 to 7).
- `min_wal_size_ratio`: changed from 0.03 to 0.025.
- `mem_pool_tuning_ratio`: changed from 0.6 to 0.45.
- `max_wal_size_ratio` and `wal_keep_size_ratio`: changed from 0.05 to 0.04.
- `*_profile` parameters, except the `workload_profile` parameter.
- :enum:`PG_BACKUP_TOOL`.
- :enum:`PG_WORKLOAD`.
- Linux, Windows, MacOS, Container, and PaaS.
- `bgwriter_lru_maxpages`: tuned to be well adapted to the associated workload.
- `archive_timeout`: bumped from 15 minutes to 30-45 minutes, as many current servers no longer use log shipping, relying on streaming and logical replication instead.
- `join_collapse_limit`, `from_collapse_limit`, and `plan_cache_mode`.
- `parallel_tuple_cost` configuration.
- `max_parallel_workers` (vcpu * 1.125 -> vcpu * 1.25 + 1) and `max_parallel_workers_per_gather` (vcpu / 3 -> vcpu / 2.5) to have more parallel workers.
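
  For example, at 16 vCPUs the two formulas quoted above evaluate as follows (the vCPU count is an assumed example; any rounding the tool applies is not shown here):

  ```python
  # Old vs. new parallel-worker formulas from this entry, evaluated at 16 vCPUs.
  vcpu = 16
  print(vcpu * 1.125, vcpu * 1.25 + 1)     # max_parallel_workers: 18.0 -> 21.0
  print(vcpu / 3, vcpu / 2.5)              # per_gather: ~5.33 -> 6.4
  ```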
- `cpu_tuple_cost` and `parallel_tuple_cost`.
- `default_statistics_target`.
- `after_commit_delay`: now workload-based rather than disk-based.
- `vacuum_*_min_age`: changed from 25% of the associated `vacuum_*_max_age` to 15%.
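
  As a worked example against PostgreSQL's stock `vacuum_freeze_max_age` default of 200 million transactions (the default is a known PostgreSQL value; the 25%/15% factors are quoted from this entry):

  ```python
  # vacuum_*_min_age derived from vacuum_*_max_age, before and after this change.
  vacuum_freeze_max_age = 200_000_000            # stock PostgreSQL default
  print(vacuum_freeze_max_age * 25 // 100)       # old: 50,000,000
  print(vacuum_freeze_max_age * 15 // 100)       # new: 30,000,000
  ```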
- `opt_wal_buffers` to be higher than `PG_PROFILE_OPTMODE.NONE`.
- `min_wal_size`, `max_wal_size`, and `wal_keep_size` (the configuration is now better detailed).
- `statement_timeout` and `lock_timeout` are reduced significantly.
- `backend_flush_after` has been corrected from 512 MiB to 512 KiB.
- `wal_init_zero`.
- `bgwriter_delay` and `bgwriter_lru_maxpages`.
- `checkpoint_flush_after` is now 512 KiB across all disk specifications.
- `./src/__init__.py` is improved with better logic. This includes changes in the `./src/utils`, `./src/static` (removed), `./src/tuner/data`, ... modules.
- :func:`PG_TUNE_ITEM.__repr__()`, and remove the argument `output_if_difference_only` in :func:`PG_TUNE_ITEM.out()`.
- :enum:`PGTUNER_SCOPE.KERNEL_BOOT` to optimize the boot settings for the database.
- `PG_SIZING`.

The documentation is easier to understand, helping new developers and users see which formulas are computed internally.

- `frozen` attribute to indicate whether the parameter expects changes or not.
- `temp_buffers_ratio` is changed from 1/3 (0.33) to 1/4 (0.25).
- `max_normal_memory_usage` is changed from 0.85 to 0.80.
- `mem_pool_epsilon_to_rollback` and `mem_pool_tuning_increment` are removed from user input and hard-coded as 0.0075 and 1/560 in the correction tuning phase.
- `mem_pool_tuning_ratio` is changed from 0.5 to 0.6.
- `hash_mem_usage_level` is changed from -4 to -6. If `hash_mem_multiplier` is 2.0, the working memory usage per connection for one arbitrary operation is reduced from 1.1713 to 1.1196.
- `mem_pool_parallel_estimate` is changed from False to True to assume that PostgreSQL can use parallel operations at any time in general, thereby hopefully reducing the working memory per connection for one arbitrary operation by around vCPU+3 units of `work_mem * average_ratio`.
- `wal_segment_size` is reduced from 2 GiB to 128 MiB; the reason for the change is documented directly in the code.
- Improve `min_wal_size`, `max_wal_size`, and `wal_keep_size` in our PostgreSQL understanding by introducing a new algorithm based on the WAL volume capacity, ensuring checkpoints can run in a timely manner during burst workloads and that a reasonable number of WAL records is maintained for streaming replication.
- `max_wal_size_remain_upper_size`.
- `autovacuum_utilization_ratio` is changed from 0.50 to 0.30.
- `num_write_transaction_per_hour_on_workload` moved from the advanced configuration to the basic configuration.
- `num_write_transaction_per_hour_on_workload` is changed from 1M (1 million) to 50K (50 thousand); this translates from roughly 270 to 13.5 attempted WRITE transactions per second.
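
  The per-second figures follow from dividing the hourly budget by 3600; the quoted 270 and 13.5 appear to be rounded conservatively, which is our reading:

  ```python
  # Convert the hourly WRITE budget into an approximate per-second rate.
  for per_hour in (1_000_000, 50_000):
      print(f"{per_hour:>9,} tx/hour ~ {per_hour / 3600:.1f} tx/second")
  # 1,000,000/hour ~ 277.8/s (quoted as ~270); 50,000/hour ~ 13.9/s (quoted as 13.5)
  ```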
- `repurpose_wal_buffers`, as it makes zero contribution on small servers unless you have too little RAM and a low-end HDD on the WAL partition.
- `database_size_in_gib` in the basic configuration (default 10 GiB, maximum 32 TiB). This is used in the anti-wraparound tuning as the minimum boundary, in the hope that the data volume can be scanned randomly at 30% of WRITE IOPS across the full data files (not index files). If users don't know how much data they will have (for example, a newly on-boarded application), setting it to zero means 60% of the used volume in the data partition.
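
  A literal reading of the zero fallback (the helper name and signature are hypothetical; the 60% factor is quoted from this entry):

  ```python
  # Literal reading of the fallback: zero means 60% of the used data volume.
  def effective_database_size_gib(configured_gib: float, used_data_volume_gib: float) -> float:
      return used_data_volume_gib * 0.6 if configured_gib == 0 else configured_gib

  print(effective_database_size_gib(0, 500))   # 300.0 (fallback path)
  print(effective_database_size_gib(10, 500))  # 10 (explicit value wins)
  ```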
- `autovacuum_naptime` is now 15 seconds for one worker plus 30 seconds for each additional worker.
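
  Read literally, the naptime grows linearly with the worker count; the linear form below is our reading of this entry, not code from the project:

  ```python
  # Literal reading: 15 s for the first worker, +30 s per additional worker.
  def autovacuum_naptime_s(workers: int) -> int:
      return 15 + 30 * (workers - 1)

  print([autovacuum_naptime_s(w) for w in (1, 2, 3, 4)])   # [15, 45, 75, 105]
  ```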
- INSERT-driven autovacuum (`*_insert_threshold` and `*_insert_scale_factor`) now shares the same values as normal autovacuum.
- `max_wal_size` parameter in the general tuning phase.
- `archive_timeout` is 15 minutes on large systems and 30 minutes to 1 hour on small systems in the general tuning phase.
- `checkpoint_timeout` is 30 minutes on the MINI profile instead of 15 minutes in the general tuning phase.
- `wal_keep_size` defaults to 25 base WAL files (400 MiB, as on Azure).
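
  The 400 MiB figure is simply 25 segments at PostgreSQL's stock 16 MiB `wal_segment_size`:

  ```python
  # 25 base WAL files at the stock 16 MiB wal_segment_size.
  print(25 * 16, "MiB")   # 400 MiB
  ```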
- `max_slot_wal_keep_size` (default -1).
- `default_statistics_target` has a minor change.
- `checkpoint_flush_after` is back to 256 KiB in the general tuning phase, bumped to 512 KiB or 1 MiB if the data volume is strong.
- Improve `min_wal_size`, `max_wal_size`, and `wal_keep_size` in our PostgreSQL understanding by introducing a new algorithm based on the WAL volume capacity, ensuring checkpoints can run in a timely manner during burst workloads and that a reasonable number of WAL records is maintained for streaming replication.
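
  A minimal sketch of what capacity-based sizing could look like, assuming the `*_ratio` defaults listed earlier in this changelog multiply the WAL volume capacity; the function is hypothetical, not the project's actual algorithm:

  ```python
  # Hypothetical sketch, not the project's actual algorithm: derive WAL
  # settings from the WAL volume capacity using the ratio defaults listed
  # earlier in this changelog (min 0.025, max 0.04, keep 0.04).
  def size_wal_settings(wal_volume_gib: float) -> dict:
      return {
          "min_wal_size_gib": wal_volume_gib * 0.025,
          "max_wal_size_gib": wal_volume_gib * 0.04,
          "wal_keep_size_gib": wal_volume_gib * 0.04,
      }

  print(size_wal_settings(256.0))   # e.g. a 256 GiB WAL partition
  ```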
- `archive_timeout` is scaled by an extra 10 minutes for each unit of larger WAL size (capped at 2 hours).
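
  Read literally, the scaling rule caps out as follows (the helper and the 30-minute base are assumptions for illustration; only the "+10 minutes per unit" rule and the 2-hour cap come from this entry):

  ```python
  # Literal reading: +10 minutes per unit of larger WAL size, capped at 2 hours.
  def archive_timeout_minutes(base_min: int, wal_size_units_above_base: int) -> int:
      return min(base_min + 10 * wal_size_units_above_base, 120)

  print([archive_timeout_minutes(30, u) for u in (0, 3, 6, 9)])  # [30, 60, 90, 120]
  ```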
- `checkpoint_timeout` now adds a minimum time to finish the checkpoint (at 70% of mixed data IOPS) depending on the type of workload (independent of the workload scale) in the correction tuning phase.
- `bgwriter_lru_maxpages` is increased when the disk is an SSD or stronger.
- `*_flush_after` is added into the correction tuning phase.

Complete redesign of the web dashboard with improved user experience.
- `os_reserved_memory` parameter.
- `vacuum_safety_level` parameter into the tuning guideline.
- `bgwriter` and `autovacuum`.
- Cleanup development and legacy code.
- `vacuum_failsafe_age` and `vacuum_multixact_failsafe_age` parameters into our tuning guideline. Push the two `*_freeze_table_age` parameters into the correction tuning phase. These arguments require good estimation.
- Cleanup development and legacy code.
- `_version` endpoint to the `_health` endpoint, with a more dedicated health check; service uptime is reported.

Initial release of PGTuner.

- Support for PostgreSQL 13 -> 17.