Tuning your PostgreSQL Server for performance
PostgreSQL is an open-source, enterprise-level relational database management system known for its robustness. Various organizations choose PostgreSQL as it supports advanced data types, complies with the ACID model, is easily scalable and has strong security features. Since many mission-critical applications use PostgreSQL, database administrators have to ensure they have a monitoring system in place to help identify pain points and tune the database as and when necessary to enhance database performance.
Let's look into some of the most important configuration parameters for tuning the PostgreSQL database server:
Memory Tuning
Settings | shared_buffers |
---|---|
Description | Shared buffers are a dedicated section of memory used to cache frequently accessed data and index blocks. By serving reads from shared buffers, the database minimizes the need to read data from disk, which is typically slow. Note: Increasing this setting also requires an increase in max_wal_size so that the larger amount of data can be processed when a checkpoint occurs. |
Default | 128 MB |
Recommended | 25% of the RAM |
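As a minimal sketch, here is how the 25% rule of thumb might be applied on a hypothetical server with 32 GB of RAM (the RAM size and values below are assumptions; adjust them for your hardware):

```sql
-- Sketch: 25% of an assumed 32 GB of RAM.
-- shared_buffers only takes effect after a server restart.
ALTER SYSTEM SET shared_buffers = '8GB';
-- Per the note above, raise max_wal_size alongside it so checkpoints
-- can absorb the larger volume of dirty buffers.
ALTER SYSTEM SET max_wal_size = '4GB';
```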
Settings | work_mem |
---|---|
Description | Work memory is the memory allocated for executing individual operations, such as sorting or hashing. It holds the temporary data structures and intermediate results generated during the operation. The amount of work memory an operation needs depends on the complexity and size of the input data, as well as the available system memory. Inadequate work memory forces operations to use disk-based temporary files, resulting in slower performance. Note: PostgreSQL 13 introduced hash_mem_multiplier, an additional option for fine-tuning the memory allocated to hash-based operations. |
Default | 4 MB |
Recommended | A reasonable value can be chosen based on the amount of data that is queried and brought into memory. Also consider the max_connections parameter before setting a value, as several connections may be running in parallel (total RAM used = work_mem * concurrent connections). Finding the optimal value for work_mem can be challenging, but a reasonable starting point that works for many workloads is around 64 MB. |
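Because of the per-connection multiplication above, one cautious approach is to raise work_mem for a single session instead of server-wide. A sketch, assuming a hypothetical orders table:

```sql
-- Raise work_mem for this session only; the server-wide default is untouched.
SET work_mem = '64MB';
-- EXPLAIN ANALYZE shows whether a sort still spills to disk:
-- "Sort Method: external merge" means work_mem is too small,
-- "Sort Method: quicksort" means the sort fit in memory.
EXPLAIN ANALYZE SELECT * FROM orders ORDER BY created_at;
```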
Settings | maintenance_work_mem |
---|---|
Description | The maintenance_work_mem parameter controls the amount of memory used for maintenance operations such as VACUUM, index creation, and adding foreign keys. Since only one of these operations can be executed at a time in a database session, it is safe to set this value larger than work_mem. Larger settings might improve performance for vacuuming and for restoring database dumps. |
Default | 64 MB |
Recommended | Keep in mind that when autovacuum runs, the memory consumed can reach autovacuum_max_workers x maintenance_work_mem. With that in mind, 512 MB is a reasonable value. |
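To make that arithmetic concrete, a sketch assuming the default autovacuum_max_workers of 3:

```sql
-- With autovacuum_max_workers = 3 (the default), a 512 MB setting means
-- autovacuum can consume up to 3 x 512 MB = 1.5 GB at peak.
ALTER SYSTEM SET maintenance_work_mem = '512MB';
SELECT pg_reload_conf();  -- takes effect without a server restart
```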
Cache Tuning
Settings | effective_cache_size |
---|---|
Description | The effective_cache_size parameter specifies the amount of memory the database server can assume is available for caching data and index blocks. It represents an estimate of the memory available to the operating system file system cache, after accounting for other processes running on the same system. The query planner uses this value to calculate the cost of candidate query execution plans: it provides an informed estimate of how much data is likely to be cached in memory, which in turn drives the choice of an optimal execution plan. |
Default | 4 GB |
Recommended | 50% of the RAM |
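Since effective_cache_size only informs the planner and allocates nothing, you can experiment with it per session and compare the resulting plans. A sketch, again using a hypothetical orders table:

```sql
-- effective_cache_size is a planner hint, not an allocation; a larger value
-- makes index scans look cheaper relative to sequential scans.
SET effective_cache_size = '16GB';
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
```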
Worker Parameters
Settings | max_worker_processes |
---|---|
Description | The max_worker_processes parameter in PostgreSQL sets a limit on the number of concurrent background worker processes that can run simultaneously. These worker processes perform various functions such as parallel query execution, logical replication, and background tasks registered by extensions. |
Default | 8 |
Recommended | It is recommended to use about 50% of the available CPU cores so that the rest of the system can run normally. |
Settings | max_parallel_workers |
---|---|
Description | The max_parallel_workers parameter in PostgreSQL limits the total count of parallel worker processes that can be active simultaneously. These worker processes are employed to enhance system performance during parallel query execution and other operations. Careful configuration of this parameter is crucial because it can significantly impact the resource utilization and overall performance of the system. |
Default | 8 |
Recommended | It is recommended to use about 50% of the available CPU cores so that the system can run optimally. |
Settings | max_parallel_workers_per_gather |
---|---|
Description | The max_parallel_workers_per_gather parameter in PostgreSQL manages the maximum count of parallel worker processes that can be engaged in a single operation. This parameter sets an upper boundary on the number of workers that can execute parallel table scans and query plans. It can significantly impact the performance of parallel queries, and requires careful configuration that considers the system resources and workload characteristics. |
Default | 2 |
Recommended | Parallel workers for a single gather node are taken from the pool of processes established by max_worker_processes and limited by max_parallel_workers, so this value should be set with both of those parameters in mind. |
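Putting the three worker parameters together, a sketch for an assumed 8-core server (the values are illustrative, not prescriptive):

```sql
ALTER SYSTEM SET max_worker_processes = 8;            -- requires a restart
ALTER SYSTEM SET max_parallel_workers = 4;            -- ~50% of the cores
ALTER SYSTEM SET max_parallel_workers_per_gather = 2; -- per-query ceiling
```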
Others
Settings | max_wal_size |
---|---|
Description | The max_wal_size parameter in PostgreSQL sets an upper limit on the size of the write-ahead log (WAL). The WAL is a critical part of PostgreSQL's transaction management mechanism, responsible for maintaining data consistency and durability even in the event of system crashes or failures. This parameter sets the maximum size the WAL is allowed to grow to between automatic checkpoints. It is a soft limit: WAL size can exceed max_wal_size under special circumstances, such as heavy load, a failing archive_command, or a high wal_keep_size setting. |
Default | 1 GB |
Recommended | Raising this parameter reduces checkpoint frequency, which can improve write performance, but it also increases the amount of time needed for crash recovery. |
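One hedged way to judge whether max_wal_size is too small is to compare scheduled checkpoints against WAL-triggered ones (these counters live in pg_stat_bgwriter on PostgreSQL 16 and earlier):

```sql
-- checkpoints_req counts checkpoints forced by WAL volume; if it dominates
-- checkpoints_timed, the WAL fills up before the schedule fires and
-- max_wal_size is likely too low.
SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;
```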
Settings | default_statistics_target |
---|---|
Description | The default_statistics_target configuration parameter specifies the level of detail used by the query optimizer when collecting statistics about the database. It governs the sample size ANALYZE uses when examining a table, and therefore how much information the optimizer has when choosing the most efficient query execution plan. |
Default | 100 (ANALYZE samples 300 * default_statistics_target rows per table) |
Recommended | Larger values increase the time ANALYZE needs, but might improve the quality of the planner's estimates. |
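Rather than raising the server-wide default, the target can also be increased for a single skewed column. A sketch, with a hypothetical orders.customer_id column:

```sql
-- Collect finer-grained statistics for one column only.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
ANALYZE orders;  -- re-sample with the new target (300 * 500 = 150,000 rows)
```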
Caution: Changing a configuration parameter will have an impact on the whole system. It is recommended to test changes under a representative load before applying them to the production setup.
Applications Manager is a highly effective tool for monitoring PostgreSQL database performance. With its extensive monitoring capabilities, it helps database administrators ensure that the database operates smoothly with minimal downtime and predictable overhead costs in the long term. You can configure alarms on important metrics and get notified when thresholds are violated. This can serve as a heads-up for you to tune your database to enhance performance and ensure hassle-free user experiences!
For a hands-on experience of using Applications Manager to tune PostgreSQL performance, download a 30-day free trial now!