From the PostgreSQL docs:

- Number of Database Connections -> How to Find the Optimal Database Connection Pool Size:
  "for optimal throughput the number of active connections should be somewhere near ((core_count * 2) + effective_spindle_count)"
- Tuning Your PostgreSQL Server -> max_connections:
  "Generally, PostgreSQL on good hardware can support a few hundred connections."
To me, as someone who is not an experienced DBA, these two recommendations seem to contradict each other, especially in light of the offerings of some DB-as-a-Service providers.
For example, at this time Amazon RDS’s largest machine (db.r3.8xlarge) has 32 vCPUs. By the first formula it would run optimally with a pool of around 100 connections: (32 * 2) = 64, plus an effective_spindle_count in the dozens for a machine with many disks. Wouldn’t it then run very badly with the “few hundred connections” that the second quote says it should support?
Even more extreme is the discrepancy at another DBaaS provider, which offers a 2-core server accepting 500 concurrent connections. How could this possibly work well?
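To make the gap concrete, here is a minimal sketch of the first quote’s formula applied to both machine shapes. The function name and the effective spindle counts are my own guesses, since neither source states them for these machines:

```python
def optimal_pool_size(core_count: int, effective_spindle_count: int) -> int:
    """Pool-size heuristic quoted above:
    pool ~= (core_count * 2) + effective_spindle_count."""
    return (core_count * 2) + effective_spindle_count

# db.r3.8xlarge: 32 vCPUs; ~36 effective spindles is my guess for "many disks".
print(optimal_pool_size(32, 36))  # 100 -- roughly matches the first quote

# The 2-core offering: even granting it a generous 4 effective spindles...
print(optimal_pool_size(2, 4))    # 8 -- nowhere near 500 concurrent connections
```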
If I’m misunderstanding something, please let me know. Many thanks!