- Session pooling: Each client connection is given a dedicated server connection from the PgBouncer pool for its entire duration. This mode provides no connection multiplexing benefit.
- Statement pooling: Assigns client connections to pooled server connections on a per-query basis. This mode does not allow multi-statement transactions, which is a non-starter for many use cases.
- Transaction pooling: Assigns client connections to pooled server connections on a per-transaction basis, and does allow multi-statement transactions. This is the most suitable mode for the vast majority of workloads, and it is what all PlanetScale PgBouncer instances use (see the sketch below).
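To see what transaction pooling means in practice, here is a minimal Python sketch using psycopg2; the credentials and the `accounts` table are hypothetical. Every statement between `BEGIN` and `COMMIT` runs on the same underlying server connection, so multi-statement transactions behave normally, while session-level state such as `SET` should not be relied on across transactions.

```python
import psycopg2

# Hypothetical pooled connection (PgBouncer, port 6432) -- placeholder credentials.
conn = psycopg2.connect(
    host="your-database.example.com", port=6432,
    user="my_role", password="************",
    dbname="postgres", sslmode="require",
)

# Fine under transaction pooling: both statements run in one transaction,
# pinned to a single server connection until COMMIT.
with conn:  # psycopg2 commits the transaction when this block exits
    with conn.cursor() as cur:
        cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = %s", (1,))
        cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = %s", (2,))

# Risky under transaction pooling: a later transaction may land on a different
# server connection, so a session-level SET may silently not apply there.
# Prefer SET LOCAL inside a transaction, which is scoped to that transaction.
with conn:
    with conn.cursor() as cur:
        cur.execute("SET LOCAL statement_timeout = '5s'")
        cur.execute("SELECT count(*) FROM accounts")
```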
- Connect directly to your Postgres primary server, with no PgBouncer. To do so, use the credentials and host provided to you when creating a role, and use port `5432`. This provides the lowest latency and is often used for situations like data imports where you need full Postgres session capabilities.
- All Postgres databases include instances of PgBouncer running alongside your Postgres primary. You should use this for all connections from application clients. To use PgBouncer, all you need to do is change your connection port to `6432`.
- Since all production databases come with replicas, you can also connect to these for read-only transactions. To do so, append `|replica` to your Postgres role username and use port `5432`. Doing this automatically routes your queries to replicas.
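For illustration, here is how the three options might look from an application, sketched in Python with psycopg2. The host, role, password, and database name are placeholders for the credentials shown when you created your role.

```python
import psycopg2

# Placeholder credentials -- substitute the host and role from your dashboard.
HOST = "your-database.example.com"
ROLE = "my_role"
PASSWORD = "************"
DBNAME = "postgres"

# 1. Direct to the primary (lowest latency, full session features): port 5432.
direct = psycopg2.connect(host=HOST, port=5432, user=ROLE,
                          password=PASSWORD, dbname=DBNAME, sslmode="require")

# 2. Through the local PgBouncer (recommended for application clients): port 6432.
pooled = psycopg2.connect(host=HOST, port=6432, user=ROLE,
                          password=PASSWORD, dbname=DBNAME, sslmode="require")

# 3. To a replica for read-only queries: append "|replica" to the role, port 5432.
replica = psycopg2.connect(host=HOST, port=5432, user=f"{ROLE}|replica",
                           password=PASSWORD, dbname=DBNAME, sslmode="require")
```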
Direct primary connections
Direct connections provide the lowest-latency access to your Postgres instance. They are best suited for:
- Administrative tasks, like creating new databases/schemas, running manual DDL commands, and installing extensions.
- Long-running operations, like `VACUUM` and large analytical queries that are executed infrequently.
- Importing data during a migration or other bulk-loading operations.
- When you need features like `SET`, pub/sub, and other capabilities not provided by PgBouncer pooled connections.
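As a sketch of the kind of work that belongs on a direct connection, the following Python (psycopg2) example runs maintenance and session-level commands that pooled connections don't support well. The credentials and the choice of extension are hypothetical.

```python
import psycopg2

# Direct connection to the primary on port 5432 (placeholder credentials).
conn = psycopg2.connect(
    host="your-database.example.com", port=5432,
    user="my_role", password="************",
    dbname="postgres", sslmode="require",
)
conn.autocommit = True  # VACUUM cannot run inside a transaction block

with conn.cursor() as cur:
    # Session-level settings persist for the life of this direct connection.
    cur.execute("SET statement_timeout = 0")
    # Administrative work that needs a full Postgres session.
    cur.execute("CREATE EXTENSION IF NOT EXISTS pgcrypto")
    cur.execute("VACUUM ANALYZE")
```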
The number of direct connections Postgres will accept is governed by the `max_connections` parameter, which is set to a conservative default value that varies depending on cluster size. To find the current value, navigate to the “Cluster configuration” page and select the “Parameters” tab. Search for `max_connections` and it will appear, showing you the current configured value. You can increase this if necessary, though do so with care: a large number of direct connections can negatively impact performance.
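You can also confirm the active value from any client connection with `SHOW max_connections`. A small Python (psycopg2) sketch with placeholder credentials:

```python
import psycopg2

# Placeholder credentials; any connection can read server settings.
conn = psycopg2.connect(
    host="your-database.example.com", port=5432,
    user="my_role", password="************",
    dbname="postgres", sslmode="require",
)

with conn.cursor() as cur:
    cur.execute("SHOW max_connections")
    print("max_connections:", cur.fetchone()[0])

    # How many backends are currently connected.
    cur.execute("SELECT count(*) FROM pg_stat_activity")
    print("current connections:", cur.fetchone()[0])
```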
When you hit the `max_connections` limit, you’ll see error messages like:
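```
FATAL: remaining connection slots are reserved for non-replication superuser connections
FATAL: sorry, too many clients already
```

The exact wording varies slightly between Postgres versions.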
Local PgBouncer
Connections from application servers should be made via an instance of PgBouncer whenever possible. All Postgres databases include a Local PgBouncer instance: a PgBouncer process running on the same host node as the Postgres primary.
The number of client connections the local PgBouncer will accept is limited by `max_client_conn`, which can also be adjusted via the Cluster configuration page.
There are several important configuration options that you can tune for PgBouncer to ensure it’s working optimally for your setup:
- `max_client_conn`: The overall maximum number of connections the local PgBouncer will allow from clients.
- `default_pool_size`: The number of server connections allowed for each Postgres role / database pair. Keep this value low (typically 20 or less) to keep the number of direct connections low.
- `max_db_connections`: Controls the maximum number of direct connections PgBouncer can establish with Postgres. `0` indicates no limit, but you can set a different value to impose a hard limit, regardless of the number of roles and databases. You may reasonably want to set this to something like `50` to ensure you never have too many Postgres connection processes running simultaneously.
- `max_user_connections`: Similar to `max_db_connections`, but a cap at the per-role level rather than per-database. Most users won’t need a hard cap here and can leave it as unlimited (`0`).
- `max_db_client_connections`: Controls the maximum number of connections that can be made by a specific role. You likely don’t want a hard cap on this, especially if you’re operating in a serverless environment, but if you do, set it to something non-zero.
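To get a feel for how these settings interact, here is a rough sizing sketch in Python. The counts are hypothetical; note that in stock PgBouncer, `max_db_connections` is applied per database.

```python
# Hypothetical numbers -- illustrates how pool settings bound direct connections.
roles = 4                # distinct Postgres roles connecting through PgBouncer
default_pool_size = 20   # server connections allowed per role/database pair
max_db_connections = 50  # hard cap on server connections (0 would mean no limit)

# Without a cap, a single database could accumulate one pool per role:
uncapped = roles * default_pool_size                 # 80 potential direct connections
capped = min(uncapped, max_db_connections) if max_db_connections else uncapped
print(uncapped, capped)                              # 80 50 -- the cap bounds Postgres backends
```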

Connect directly to replicas
Replicas are maintained for high-availability purposes, and can also be used to handle query traffic. Since replicas are read-only, they can only serve `SELECT` queries.
All write traffic (`INSERT`, `UPDATE`, etc.) must be sent to the primary.
Replicas always experience some level of replication lag: the delay between data arriving at the primary and arriving at a replica.
Frequently, replication lag is measured in milliseconds, but it can grow to multiple seconds, especially when the server is experiencing high write traffic or network issues.
Because of these factors, you should only send queries to replicas if (A) they are read-only and (B) they can tolerate being slightly out-of-sync with the data on the primary.
For reads that cannot tolerate this, send them to the primary.
To connect to a replica, simply append `|replica` to your credential username.
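For example, a read-only session against a replica might look like the following Python (psycopg2) sketch. The credentials and the `users` table are placeholders, and `pg_last_xact_replay_timestamp()` gives a rough view of replication lag on the replica.

```python
import psycopg2

# Read-only connection to a replica: "|replica" suffix on the role, port 5432.
conn = psycopg2.connect(
    host="your-database.example.com", port=5432,
    user="my_role|replica", password="************",
    dbname="postgres", sslmode="require",
)

with conn.cursor() as cur:
    # Replicas only serve reads; writes must go to the primary.
    cur.execute("SELECT id, email FROM users WHERE id = %s", (42,))
    print(cur.fetchone())

    # Approximate replication lag on this replica.
    cur.execute("SELECT now() - pg_last_xact_replay_timestamp() AS lag")
    print(cur.fetchone()[0])
```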
