MySQL Connection Pools: Why They Matter and How They Work

TL;DR: A connection pool keeps a small set of ready MySQL connections instead of opening a new one for every request: queries run faster, latency is steadier, the database stays healthier, and excess requests wait in a queue instead of overwhelming the system.

A connection pool? What is it?

Before diving into the details of a connection pool, let me first describe what a pool is.

A pool is a resource manager for MySQL connections. It maintains a limited set of pre-created, reusable connections and hands them out on demand for executing queries against the database.

[HTTP Clients]
      |
      v
+------------------+
|  Express Routes  |
+------------------+
      |
      v  acquire()                    release()
+------------------+   in use list   -----------+
|  MySQL Pool      | <-------------- | Conn #3  |
|------------------|                 +----------+
| Free: [#1][#2]   |  getConnection() -> gives #2
| InUse:[#3][#4]   |  (if none free & < max -> create new)
| WaitQ: [R5,R6]   |  (if at max -> enqueue requester)
+------------------+
      |
      v (execute query over persistent TCP/TLS)
+------------------+
|     MySQL DB     |
+------------------+

Disclaimer: I used ChatGPT to build this ☝ diagram.

Key costs of creating a new MySQL connection (paid on every request when there is no pool):

  • TCP handshake (1-2 RTT)
  • Optional TLS negotiation (multiple RTT + crypto)
  • MySQL protocol handshake (capabilities, auth)
  • Authentication (password hash / plugin round-trip)
  • Session initialization (SET NAMES, time_zone, etc.)

RTT: Round-Trip Time; the time it takes for a message to go from client to server and for the reply (ACK) to return.

That easily adds roughly 3-10 ms on a low-latency (same-region) network; over a WAN, or with TLS and authentication plugins in play, it can be noticeably higher.

Under high traffic, thousands of short-lived connections waste CPU (context switches, kernel buffers) and inflate the tail latency (P99, P99.9).
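
You can see this overhead yourself with a quick timing sketch. This assumes a local MySQL reachable with the same credentials used in the examples below; absolute numbers will vary with network, TLS, and auth setup:

// Rough comparison: fresh connection per query vs. a pooled query.
import mysql from 'mysql2/promise'
import { performance } from 'perf_hooks'

const config = { host: 'localhost', user: 'app', password: 'pwd', database: 'appdb' }

async function main() {
  // Cold path: TCP + (optional TLS) + MySQL handshake + auth, every time.
  let t0 = performance.now()
  const conn = await mysql.createConnection(config)
  await conn.execute('SELECT 1')
  await conn.end()
  console.log(`fresh connection: ${(performance.now() - t0).toFixed(1)} ms`)

  // Warm path: the pooled connection is already authenticated.
  const pool = mysql.createPool(config)
  await pool.execute('SELECT 1') // warm-up creates the first connection
  t0 = performance.now()
  await pool.execute('SELECT 1')
  console.log(`pooled query:     ${(performance.now() - t0).toFixed(1)} ms`)
  await pool.end()
}

main().catch(console.error)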

Scenarios

  • Single shared connection: Simple, but one slow query blocks everyone (head-of-line blocking). Not scalable.

  • New connection per request: Eliminates blocking, but adds per-request overhead and risks exhausting MySQL's max_connections => error 1040 (Too many connections).

  • Proper pool (e.g., size 10-50): Reuses sessions, smooths spikes, enforces an upper bound on concurrent DB work, applies queue/backpressure instead of letting the app explode.

With a pool during a burst

Without pooling, a burst of requests forces the application to open a brand-new TCP/TLS + MySQL session for every single request.

This amplifies CPU usage, increases tail latency, and risks hitting max_connections quickly.

                          +--------------------+
                          |      HTTP Clients  |
                          |  (Browsers/Mobile) |
                          +----------+---------+
                                     |
                                     | HTTP requests (burst)
                                     v
        +-------------------+   acquire()    +----------------------------------+
        |    Node.js API    +--------------->|        Connection Pool           |
        |  (Express Routes) |                |----------------------------------|
        +----------+--------+                | Free:  [c7][c8][c9]              |
                   ^                         | InUse: [c1][c2][c3][c4]          |
                   |                         | WaitQ: (req17, req18, req19...)  |
                   | release()/rows          | Max: 40  Idle Timeout: 5m        |
                   |                         +-----------+----------------------+
                   |                                     |
                   |                         reuse socket|query
                   |                                     v
                                                +------------------+
                                                |   MySQL Server   |
                                                | (Threads = Conns)|
                                                +------------------+

Flow (left to right):
1. Clients send bursts of requests.
2. Routes call `pool.getConnection()`.
3. If a free connection exists: hand it out.
4. Else if total < max: create new, add to InUse.
5. Else: enqueue request in WaitQ (backpressure).
6. Query executes over persistent connection.
7. Results returned; connection released back to Free (or to next waiter).

Legend:
[ci] = live TCP/TLS + authenticated MySQL session
Free = idle, ready
InUse = currently executing
WaitQ = queued callers waiting for a slot

Return paths (results / release) flow right -> left (omitted arrows to reduce clutter).
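
To make steps 3-5 concrete, here is a deliberately tiny sketch of the acquire/release logic. It is not mysql2's implementation (a real pool also validates connections, applies timeouts, and handles errors), just the core idea:

type Conn = { id: number }

class TinyPool {
  private free: Conn[] = []
  private inUse = new Set<Conn>()
  private waitQ: Array<(c: Conn) => void> = []
  private total = 0

  constructor(private max: number) {}

  async acquire(): Promise<Conn> {
    const freeConn = this.free.pop()
    if (freeConn) {
      // 1. A free connection exists: hand it out.
      this.inUse.add(freeConn)
      return freeConn
    }
    if (this.total < this.max) {
      // 2. Below max: create a brand-new connection.
      const conn = { id: ++this.total }
      this.inUse.add(conn)
      return conn
    }
    // 3. At max: park the caller in the wait queue (backpressure).
    return new Promise((resolve) => this.waitQ.push(resolve))
  }

  release(conn: Conn) {
    this.inUse.delete(conn)
    const waiter = this.waitQ.shift()
    if (waiter) {
      this.inUse.add(conn) // hand it straight to the next waiter
      waiter(conn)
    } else {
      this.free.push(conn) // otherwise back to the free list
    }
  }
}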

For 1000 simultaneous HTTP requests, with a pool size of 40, only 40 queries run concurrently. The remaining 960 wait in an in-memory queue (fast, minimal overhead).

Fast in-memory wait: a lightweight queue held in application memory, far cheaper than an extra MySQL connection.

Reused connections skip most of the handshake => lower p50 and especially p95 and p99 latency.

Backpressure: if the queue keeps growing, you can reject requests early to protect the database (see the sketch below).
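
A minimal sketch of that early rejection at the application layer. MAX_PENDING is an arbitrary illustrative threshold, not a mysql2 setting (the pool's own queueLimit option can enforce a similar cap internally):

// App-level backpressure: shed load before the wait queue grows unbounded.
import { pool } from './db/pool'

const MAX_PENDING = 100 // illustrative threshold; tune per workload
let pending = 0

export async function acquireOrReject() {
  if (pending >= MAX_PENDING) {
    // Too many callers already waiting: fail fast with a retryable error.
    throw Object.assign(new Error('Database busy, try again later'), { status: 503 })
  }
  pending++
  try {
    return await pool.getConnection()
  } finally {
    pending--
  }
}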

Trade-offs / tuning:

Pool too small: artificial queuing => higher wait time.

Pool too large: resembles no pooling (thrashes the DB) and increases MySQL thread/cache pressure.

Suggestion: Start with (CPU cores x 2-4) for OLTP-style workloads; refine using measurements (throughput vs. average queue wait / tail latency). See the snippet after this list.

Measure: use metrics (in_use, idle, pending_acquire, avg acquire time, avg query time).
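
The starting point above, expressed in code. The cap of 64 is an arbitrary safety limit; treat the result as a first guess to measure against, not a final answer:

// First-guess pool size: CPU cores x 3, capped. Measure, then refine.
import os from 'os'

export const connectionLimit = Math.min(64, os.cpus().length * 3)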

Failure / edge behaviors:

  • Connection leak (forgetting to release): the pool gradually drains until it is exhausted, and all future requests hang or time out.

  • Long transactions: a transaction occupies its pooled connection for its whole duration, shrinking the effective pool size.

  • Idle timeout: if the server closes an idle connection (e.g. wait_timeout), the driver must detect the stale socket and transparently replace it.
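
One way to keep a leak from hanging callers forever is a timeout around acquisition. A sketch (the 5-second default is an arbitrary example value):

import type { PoolConnection } from 'mysql2/promise'
import { pool } from './db/pool'

export function getConnectionWithTimeout(ms = 5000): Promise<PoolConnection> {
  return new Promise((resolve, reject) => {
    let timedOut = false
    const timer = setTimeout(() => {
      timedOut = true
      reject(new Error(`Could not acquire a connection within ${ms} ms`))
    }, ms)

    pool.getConnection().then(
      (conn) => {
        clearTimeout(timer)
        if (timedOut) conn.release() // arrived too late: hand it straight back
        else resolve(conn)
      },
      (err) => {
        clearTimeout(timer)
        if (!timedOut) reject(err)
      }
    )
  })
}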

Code examples (TypeScript, Express, mysql2/promise):

Poor (new connection per request):

// Bad!
import express from 'express'
import mysql from 'mysql2/promise'

const app = express()

app.get('/users/:id', async (req, res) => {
  const conn = await mysql.createConnection({
    host: 'localhost',
    user: 'app',
    password: 'pwd',
    database: 'appdb',
  })
  try {
    const [rows] = await conn.execute('SELECT * FROM users WHERE id = ?', [req.params.id])
    res.json(rows)
  } finally {
    // Costly connection teardown.
    // CPU 📈. Performance 📉
    await conn.end()
  }
})

app.listen(3000)

Better (connection pool):

// db/pool.js start
import mysql from 'mysql2/promise'

export const pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  waitForConnections: true,
  connectionLimit: 40, // Tune
  queueLimit: 0, // 0 = unlimited queue (consider a cap)
  enableKeepAlive: true,
  keepAliveInitialDelay: 10000,
})
// db/pool.js end

// index.js start
import express from 'express'
import { pool } from './db/pool'

const app = express()

app.get('/users/:id', async (req, res, next) => {
  let conn
  try {
    conn = await pool.getConnection() // Acquire
    // A simple SELECT needs no explicit transaction
    const [rows] = await conn.execute('SELECT * FROM users WHERE id = ?', [req.params.id])
    res.json(rows)
  } catch (e) {
    next(e)
  } finally {
    conn?.release() // Critical: return to pool
  }
})

app.get('/health/db', async (_req, res) => {
  try {
    await pool.query('SELECT 1 AS ok')
    res.json({ db: 'up' })
  } catch {
    res.status(503).json({ db: 'down' })
  }
})

app.listen(3000)
// index.js end

Using pooled query shortcut (auto acquire/release for simple statements):

import { pool } from '../db/pool'

export async function getUser(id: string) {
  const [rows] = await pool.execute('SELECT * FROM users WHERE id = ?', [id])

  return rows
}
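
The shortcut covers single statements. For multi-statement transactions you still need a dedicated connection, because every statement must run on the same session. A sketch (the accounts table and its columns are made up for illustration):

import { pool } from '../db/pool'

export async function transferCredits(from: string, to: string, amount: number) {
  // One connection for the whole transaction.
  const conn = await pool.getConnection()
  try {
    await conn.beginTransaction()
    await conn.execute('UPDATE accounts SET credits = credits - ? WHERE id = ?', [amount, from])
    await conn.execute('UPDATE accounts SET credits = credits + ? WHERE id = ?', [amount, to])
    await conn.commit()
  } catch (e) {
    await conn.rollback()
    throw e
  } finally {
    conn.release() // always return the connection to the pool
  }
}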

Detecting pool pressure:

import { pool } from '../db/pool'

export function logPoolStats() {
  // NOTE: These are internal, undocumented properties of the mysql2 pool.
  // Verify them against your driver version before relying on them in production.
  const anyPool = pool as any

  console.log({
    inUse: anyPool._allConnections.length - anyPool._freeConnections.length,
    free: anyPool._freeConnections.length,
    total: anyPool._allConnections.length,
    pending: anyPool._acquiringConnections.length,
  })
}

If you skip pooling under load:

  • Higher CPU on both app & DB just for handshakes.
  • More TIME_WAIT sockets on the app host (risking ephemeral port exhaustion).
  • Spiky latency (connection setup time varies from request to request).
  • Greater chance of hitting MySQL connection/thread limits.

If you use pooling properly:

  • Stable, lower latency.
  • Predictable concurrency.
  • Lower per-request CPU usage.
  • Easier graceful shutdown (drain the pool, finish in-flight requests).

Bonus: Graceful Shutdown Example

import express from 'express'
import mysql from 'mysql2/promise'
import http from 'http'

const pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  waitForConnections: true,
  connectionLimit: 20,
})

const app = express()

let activeRequests = 0
let shuttingDown = false

// Track in-flight requests & block new ones during shutdown
app.use((req, res, next) => {
  if (shuttingDown) {
    res.status(503).json({ error: 'Server shutting down' })

    return
  }

  activeRequests++

  // 'close' fires whether the response finished normally or the client
  // aborted, so the counter cannot leak on dropped connections.
  res.on('close', () => activeRequests--)

  next()
})

// Demo route using the pool
app.get('/users/:id', async (req, res, next) => {
  let conn
  try {
    conn = await pool.getConnection()
    const [rows] = await conn.execute('SELECT * FROM users WHERE id = ?', [req.params.id])
    res.json(rows)
  } catch (e) {
    next(e)
  } finally {
    conn?.release()
  }
})

const server = http.createServer(app)

server.listen(3000, () => {
  console.log('Listening on :3000')
})

// Graceful shutdown logic
async function shutdown(signal: string) {
  if (shuttingDown) {
    return
  }

  shuttingDown = true

  console.log(`${signal} received: initiating graceful shutdown`)

  // 1. Stop accepting new connections
  server.close(() => {
    console.log('HTTP server closed')
  })

  const deadline = Date.now() + 10000 // 10s max

  // 2. Wait for in-flight requests (or timeout)
  while (activeRequests > 0 && Date.now() < deadline) {
    console.log(`Waiting: ${activeRequests} active request(s)...`)

    await new Promise((r) => setTimeout(r, 250))
  }

  if (activeRequests > 0) {
    console.warn('Forcing shutdown with active requests remaining')
  }

  // 3. Drain pool (reject further getConnection calls)
  try {
    await pool.end()

    console.log('Connection pool drained')
  } catch (e) {
    console.error('Error draining pool', e)
  }

  process.exit(0)
}

;['SIGTERM', 'SIGINT'].forEach((sig) => process.on(sig as NodeJS.Signals, () => shutdown(sig)))

process
  .on('uncaughtException', (err) => {
    console.error('uncaughtException', err)
    shutdown('uncaughtException')
  })
  .on('unhandledRejection', (reason, promise) => {
    console.error('unhandledRejection', { reason, promise })
    shutdown('unhandledRejection')
  })

When not to pool:

  • CLI apps, scripts, and cron jobs: short-lived processes that run only a handful of queries.
  • Single-operation scripts: one query means one connection either way, so a pool has nothing to amortize.
  • Note: Serverless (e.g. AWS Lambda, Cloud Functions) is a special case. You still often keep a module-level pool reused across warm invocations, or rely on a managed proxy (RDS Proxy, Cloud SQL Auth Proxy), so only cold starts pay the connection setup cost. Total concurrency can still exhaust MySQL limits if each function instance holds its own connections (sketch below).
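
A sketch of that module-level pattern for AWS Lambda. connectionLimit: 1 because a Lambda execution environment handles one request at a time; platforms that serve concurrent requests per instance may warrant more:

import mysql from 'mysql2/promise'

// Created once per function instance, reused across warm invocations.
const pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  connectionLimit: 1, // one in-flight request per Lambda instance
})

export async function handler(event: { userId: string }) {
  const [rows] = await pool.execute('SELECT * FROM users WHERE id = ?', [event.userId])

  return { statusCode: 200, body: JSON.stringify(rows) }
}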