March 12, 2026 · 9 min read · loadtest.qa

How to Load Test Your Database: PostgreSQL, MySQL, and MongoDB

Database-specific load testing guide - pgbench for PostgreSQL, sysbench for MySQL, YCSB for MongoDB, plus application-layer testing strategies.

Database load testing differs fundamentally from API load testing. Where API load testing measures end-to-end application behavior, database load testing isolates the database layer and measures its raw capacity: transactions per second, query latency under concurrent access, replication lag under write load, and connection handling limits.

Testing the database in isolation answers questions that application-layer testing cannot: What is the maximum write throughput this PostgreSQL instance can sustain before durability guarantees degrade? How does MongoDB handle mixed read-write workloads? At what point does MySQL replication lag make read replicas unreliable?

This guide covers database-specific tooling for PostgreSQL (pgbench), MySQL (sysbench), and MongoDB (YCSB), plus strategies for testing your database through your application layer with k6.

Why Database Load Testing Differs

Application-layer load tests tell you how the database performs under your specific query patterns. Database benchmark tools test the database engine’s fundamental capabilities:

  • Raw TPS (Transactions Per Second): What is the maximum throughput this instance can sustain?
  • Concurrency scaling: How do throughput and latency change as concurrent connections increase from 10 to 500?
  • Read-write ratio sensitivity: How does performance change at 80/20 vs 50/50 vs 20/80 read-write ratios?
  • Data volume effects: How does performance change as the dataset grows from 1GB to 100GB?
  • Recovery characteristics: How quickly does the database return to normal performance after a spike?

These questions require synthetic workloads that isolate the database, not application-layer traffic that mixes database performance with application code, network latency, and caching effects.

PostgreSQL: pgbench

pgbench is the official PostgreSQL benchmarking tool, included with every PostgreSQL installation. It uses the TPC-B workload (a banking transaction simulation) as its default benchmark.

Setup and Initialization

Create a dedicated database for benchmarking (never use your production database):

# Create benchmark database
createdb pgbench_test

# Initialize with scale factor (scale 100 = ~1.5GB database)
# Scale factor 1 = 100,000 accounts, ~16MB
# Scale factor 100 = 10,000,000 accounts, ~1.5GB
pgbench -i -s 100 pgbench_test

The initialization creates four tables: pgbench_accounts, pgbench_branches, pgbench_tellers, and pgbench_history.

Running the Default TPC-B Benchmark

# Basic benchmark: 10 concurrent clients (-c), 5 worker threads (-j),
# 60-second run (-T), progress report every 10 seconds (-P)
pgbench -c 10 -j 5 -T 60 -P 10 pgbench_test

# Sample output:
# pgbench (15.3)
# starting vacuum...end.
# progress: 10.0 s, 1234.5 tps, lat 8.1 ms stddev 2.3
# progress: 20.0 s, 1287.3 tps, lat 7.8 ms stddev 2.1
# progress: 30.0 s, 1245.6 tps, lat 8.0 ms stddev 2.2
# transaction type: <builtin: TPC-B (sort of)>
# scaling factor: 100
# number of clients: 10
# number of threads: 5
# duration: 60 s
# number of transactions actually processed: 74234
# latency average = 8.08 ms
# latency stddev = 2.2 ms
# tps = 1237.2 (without initial connection establishment)

Interpreting results:

  • TPS (transactions per second): The primary throughput metric. A db.r5.2xlarge AWS instance typically achieves 2,000-5,000 TPS for TPC-B.
  • Latency average: Mean transaction time. Note: pgbench does not print percentiles - use -l to log per-transaction latencies and compute percentiles offline (the -r flag adds per-statement averages only).
  • Latency stddev: Standard deviation. High stddev indicates inconsistent performance.
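
Because pgbench only reports the average and standard deviation, percentiles have to be computed from the per-transaction log written by -l. A minimal sketch of that post-processing - the log lines below are made-up stand-ins for real pgbench_log.* output (the third field is the transaction latency in microseconds):

```shell
# Real data would come from: pgbench -c 10 -j 5 -T 60 -l pgbench_test
# Log line format: client_id transaction_no latency_us script_no epoch usec
cat > sample_log.txt <<'EOF'
0 1 8123 0 1700000000 101234
0 2 7891 0 1700000000 405678
1 1 9544 0 1700000001 223456
1 2 15210 0 1700000001 623456
0 3 8050 0 1700000002 423456
EOF

# Sort by latency, then pick the value at the 95th-percentile index
sort -n -k3,3 sample_log.txt | awk '{lat[NR] = $3} END {
  idx = int(NR * 0.95); if (idx < 1) idx = 1
  printf "p95 latency = %.2f ms\n", lat[idx] / 1000
}'
```

The same pipeline works for p99 by changing the multiplier; for long runs, expect one log file per pgbench worker thread.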

Concurrency Scaling Test

The most valuable pgbench test scales concurrent clients to find the optimal concurrency point:

#!/bin/bash
# concurrency-test.sh
# Tests database performance at different concurrency levels

DB="pgbench_test"

for CLIENTS in 1 5 10 25 50 100 200 500; do
  # Use roughly half as many worker threads as clients
  THREADS=$(( CLIENTS / 2 + 1 ))
  RESULT=$(pgbench -c "$CLIENTS" -j "$THREADS" -T 30 "$DB" 2>/dev/null)
  TPS=$(echo "$RESULT" | awk '/^tps/ {print $3}')
  LATENCY=$(echo "$RESULT" | awk '/^latency average/ {print $4}')
  echo "Clients=$CLIENTS: $TPS tps, $LATENCY ms avg latency"
done

Sample output from this test:

Clients=1:   1456 tps, 0.7 ms avg latency
Clients=5:   4823 tps, 1.0 ms avg latency
Clients=10:  6234 tps, 1.6 ms avg latency
Clients=25:  7891 tps, 3.2 ms avg latency
Clients=50:  8123 tps, 6.2 ms avg latency
Clients=100: 7934 tps, 12.6 ms avg latency
Clients=200: 6842 tps, 29.2 ms avg latency
Clients=500: 4123 tps, 121 ms avg latency

In this example, throughput peaks at 50 concurrent clients and declines beyond that. This tells you the optimal connection pool size for this instance: setting your connection pool above 50 connections will not improve throughput and will increase latency.
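
The peak can also be extracted mechanically from a saved sweep. A small sketch, assuming the results are written to a file (the heredoc below just copies the sample output above):

```shell
# Save the sweep results (copied from the sample output above)
cat > sweep.txt <<'EOF'
Clients=1:   1456 tps, 0.7 ms avg latency
Clients=5:   4823 tps, 1.0 ms avg latency
Clients=10:  6234 tps, 1.6 ms avg latency
Clients=25:  7891 tps, 3.2 ms avg latency
Clients=50:  8123 tps, 6.2 ms avg latency
Clients=100: 7934 tps, 12.6 ms avg latency
Clients=200: 6842 tps, 29.2 ms avg latency
Clients=500: 4123 tps, 121 ms avg latency
EOF

# Split on '=', ':' and spaces; field 2 is the client count, field 3 the TPS
awk -F'[=: ]+' '$3 > best { best = $3; clients = $2 }
  END { print "Peak: " best " tps at " clients " clients" }' sweep.txt
```

Running this against the sample data prints the 50-client peak, which is the number to carry into your connection pool configuration.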

Read-Only Benchmark

For read-heavy workloads, benchmark with --select-only:

pgbench \
  -c 50 \
  -j 25 \
  -T 60 \
  --select-only \
  pgbench_test

Custom SQL Benchmark

Test your actual query patterns by creating a custom pgbench script:

-- custom-query-test.pgbench
\set account_id random(1, 1000000)
SELECT id, balance, email
FROM accounts
WHERE id = :account_id;

UPDATE accounts
SET balance = balance - 10,
    updated_at = NOW()
WHERE id = :account_id;

INSERT INTO transactions (account_id, amount, created_at)
VALUES (:account_id, 10, NOW());

Run with the custom script:

pgbench -c 50 -j 25 -T 60 -f custom-query-test.pgbench pgbench_test

MySQL: sysbench

sysbench is the standard MySQL benchmark tool. It supports OLTP (online transaction processing) workloads, CPU tests, and file I/O tests.

Installation

# Ubuntu/Debian
apt-get install sysbench

# macOS
brew install sysbench

Prepare the Benchmark

# Create test tables (10 tables, 1M rows each)
sysbench oltp_read_write \
  --mysql-host=localhost \
  --mysql-port=3306 \
  --mysql-user=benchmark \
  --mysql-password=benchmark_password \
  --mysql-db=sysbench_test \
  --tables=10 \
  --table-size=1000000 \
  prepare

OLTP Read-Write Benchmark

# Mixed read-write OLTP benchmark
sysbench oltp_read_write \
  --mysql-host=localhost \
  --mysql-port=3306 \
  --mysql-user=benchmark \
  --mysql-password=benchmark_password \
  --mysql-db=sysbench_test \
  --tables=10 \
  --table-size=1000000 \
  --threads=64 \             # 64 concurrent threads
  --time=120 \               # Run for 120 seconds
  --report-interval=10 \     # Progress every 10 seconds
  run

# Sample output:
# SQL statistics:
#   queries performed:
#     read:                       2450320
#     write:                      700092
#     other:                      350046
#     total:                      3500458
#   transactions:                 175023 (1458.5 per sec.)
#   queries:                      3500458 (29170.2 per sec.)
#   ignored errors:               0      (0.0 per sec.)
#   reconnects:                   0      (0.0 per sec.)
#
# Latency (ms):
#          min:                                    3.91
#          avg:                                   43.88
#          max:                                  821.89
#          95th percentile:                       95.81
#          sum:                              7680157.39
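
With --report-interval enabled, the per-interval lines are worth watching: steady tps means the instance has settled, while a downward trend under constant load points at buffer pool warm-up ending or I/O saturating. A sketch for averaging the interval throughput - the progress lines below are hypothetical stand-ins for real sysbench output:

```shell
# --report-interval progress lines look roughly like this (made-up values):
cat > sysbench_progress.txt <<'EOF'
[ 10s ] thds: 64 tps: 1420.11 qps: 28402.20 (r/w/o: 19881.54/5680.44/2840.22) lat (ms,95%): 97.55 err/s: 0.00 reconn/s: 0.00
[ 20s ] thds: 64 tps: 1465.30 qps: 29306.00 (r/w/o: 20514.20/5861.20/2930.60) lat (ms,95%): 94.10 err/s: 0.00 reconn/s: 0.00
[ 30s ] thds: 64 tps: 1490.05 qps: 29801.00 (r/w/o: 20860.70/5960.20/2980.10) lat (ms,95%): 92.42 err/s: 0.00 reconn/s: 0.00
EOF

# Pull the value after each "tps:" token and average it across intervals
awk '{ for (i = 1; i <= NF; i++) if ($i == "tps:") { sum += $(i+1); n++ } }
  END { printf "avg tps = %.1f over %d intervals\n", sum / n, n }' sysbench_progress.txt
```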

Read-Only and Write-Only Tests

# Read-only (SELECT queries only)
sysbench oltp_read_only \
  --threads=64 --time=60 \
  --mysql-host=localhost --mysql-db=sysbench_test \
  --tables=10 --table-size=1000000 \
  run

# Write-only (INSERT/UPDATE/DELETE)
sysbench oltp_write_only \
  --threads=64 --time=60 \
  --mysql-host=localhost --mysql-db=sysbench_test \
  --tables=10 --table-size=1000000 \
  run

# Point-select only (single-row lookup by primary key - tests index efficiency)
sysbench oltp_point_select \
  --threads=64 --time=60 \
  --mysql-host=localhost --mysql-db=sysbench_test \
  --tables=10 --table-size=1000000 \
  run

Clean Up

# Pass --tables=10 so all prepared tables are dropped, not just sbtest1
sysbench oltp_read_write \
  --mysql-host=localhost --mysql-db=sysbench_test \
  --tables=10 \
  cleanup

MongoDB: YCSB

YCSB (Yahoo Cloud Serving Benchmark) is the standard tool for benchmarking MongoDB and other NoSQL databases. It supports six workload types that cover common NoSQL access patterns.

Installation

# Download YCSB
curl -O --location https://github.com/brianfrankcooper/YCSB/releases/latest/download/ycsb-0.17.0.tar.gz
tar xfvz ycsb-0.17.0.tar.gz
cd ycsb-0.17.0

Load Test Data

# Load 1 million records into MongoDB
./bin/ycsb load mongodb \
  -s \
  -P workloads/workloada \
  -p mongodb.url="mongodb://localhost:27017/ycsb" \
  -p recordcount=1000000

Run Workloads

YCSB defines six standard workloads:

# Workload A: 50% read, 50% update (update-heavy)
./bin/ycsb run mongodb \
  -s \
  -P workloads/workloada \
  -p mongodb.url="mongodb://localhost:27017/ycsb" \
  -p operationcount=500000 \
  -threads 64

# Workload B: 95% read, 5% update (read-mostly)
./bin/ycsb run mongodb -P workloads/workloadb \
  -p mongodb.url="mongodb://localhost:27017/ycsb" \
  -p operationcount=500000 -threads 64

# Workload C: 100% read (read-only)
./bin/ycsb run mongodb -P workloads/workloadc \
  -p mongodb.url="mongodb://localhost:27017/ycsb" \
  -p operationcount=500000 -threads 64

# Workload D: read latest (insert-heavy with recency bias)
./bin/ycsb run mongodb -P workloads/workloadd \
  -p mongodb.url="mongodb://localhost:27017/ycsb" \
  -p operationcount=500000 -threads 64

# Workload E: short ranges (95% scan, 5% insert - scan-heavy)
./bin/ycsb run mongodb -P workloads/workloade \
  -p mongodb.url="mongodb://localhost:27017/ycsb" \
  -p operationcount=500000 -threads 64

# Workload F: read-modify-write (read a record, modify it, write it back)
./bin/ycsb run mongodb -P workloads/workloadf \
  -p mongodb.url="mongodb://localhost:27017/ycsb" \
  -p operationcount=500000 -threads 64

YCSB outputs throughput (operations/second) and latency percentiles for each operation type. Match the workload to your application’s read-write ratio for the most relevant results.
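
YCSB prints one metric per line in a [SECTION], metric, value format, which makes the results easy to post-process. A sketch for extracting the p95 latencies per operation type - the values below are hypothetical stand-ins, not measurements:

```shell
# YCSB run output looks roughly like this (hypothetical values):
cat > ycsb_output.txt <<'EOF'
[OVERALL], Throughput(ops/sec), 16441.5
[READ], AverageLatency(us), 1821.4
[READ], 95thPercentileLatency(us), 3021
[UPDATE], AverageLatency(us), 2104.9
[UPDATE], 95thPercentileLatency(us), 3610
EOF

# Convert the p95 latencies from microseconds to milliseconds
awk -F', ' '$2 == "95thPercentileLatency(us)" {
  printf "%s p95 = %.2f ms\n", $1, $3 / 1000
}' ycsb_output.txt
```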

Testing Through the Application Layer

Benchmark tools test raw database capacity. But your application does not use raw database access - it uses an ORM, connection pooler, and application logic that transforms data before returning it. Testing through the application layer reveals the combined performance of all these layers.

k6 Script for Database-Heavy Endpoints

// db-heavy-load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '2m', target: 50 },
    { duration: '10m', target: 50 },
    { duration: '2m', target: 0 },
  ],
  thresholds: {
    'http_req_duration{endpoint:search}': ['p(95)<1000'],
    'http_req_duration{endpoint:report}': ['p(95)<3000'],
    'http_req_failed': ['rate<0.01'],
  },
};

const BASE_URL = __ENV.BASE_URL || 'https://api.staging.example.com';

// Simulate queries that are known to hit the database hard
export default function() {
  // Full-text search - typically not cached
  const searchTerms = ['widget', 'gadget', 'tool', 'device', 'component'];
  const term = searchTerms[Math.floor(Math.random() * searchTerms.length)];

  const searchRes = http.get(
    `${BASE_URL}/search?q=${term}&sort=relevance&page=${Math.floor(Math.random() * 10) + 1}`,
    { tags: { endpoint: 'search' } }
  );
  check(searchRes, { 'search 200': (r) => r.status === 200 });

  sleep(2);

  // Aggregation report - complex GROUP BY query
  const reportRes = http.get(
    `${BASE_URL}/reports/sales?period=last_30_days&group_by=product_category`,
    { tags: { endpoint: 'report' } }
  );
  check(reportRes, { 'report 200': (r) => r.status === 200 });

  sleep(3);
}

Key Database Metrics to Monitor During Tests

Regardless of which tool you use, monitor these metrics during database load tests:

| Metric | PostgreSQL | MySQL | MongoDB |
|---|---|---|---|
| Transactions per second | pg_stat_database.xact_commit | Com_commit | serverStatus.opcounters |
| Active connections | pg_stat_activity | Threads_connected | serverStatus.connections |
| Lock waits | pg_stat_activity WHERE wait_event IS NOT NULL | Innodb_row_lock_waits | serverStatus.locks |
| Buffer/cache hit rate | blks_hit / (blks_hit + blks_read) | Key_reads / Key_read_requests | wiredTiger.cache |
| Replication lag | pg_stat_replication.write_lag | Seconds_Behind_Master | rs.printReplicationInfo() |
| Query throughput | pg_stat_statements | slow query log | db.currentOp() |
| IOPS | OS-level iostat | OS-level iostat | OS-level iostat |
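
To make the buffer hit-rate formula from the table concrete, here is a quick calculation with hypothetical counter values (in practice they come from pg_stat_database); on a steady OLTP workload, a rate much below 99% is a common sign the working set no longer fits in memory:

```shell
# Hypothetical counter values; real ones come from pg_stat_database
blks_hit=985000
blks_read=15000
awk -v h="$blks_hit" -v r="$blks_read" \
  'BEGIN { printf "cache hit rate = %.1f%%\n", 100 * h / (h + r) }'
```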

Common Database Bottlenecks Found in Load Tests

Lock contention: High-concurrency write tests expose tables or rows that are locked by long-running transactions. Symptoms: TPS drops while active connection count stays high; pg_stat_activity shows many queries in wait_event = 'Lock' state.

Connection exhaustion: When concurrent clients exceed max_connections, new connections are rejected. Tests reveal this as a hard ceiling on concurrency. Fix with connection pooling (PgBouncer for PostgreSQL).

Index bloat: Under sustained write load, B-tree indexes become fragmented. Performance degrades gradually over hours of testing. PostgreSQL: monitor pg_stat_user_tables.n_dead_tup for dead-tuple buildup. Periodic VACUUM or REINDEX CONCURRENTLY resolves this.

Write amplification: On SSD storage, high write workloads can saturate I/O capacity. Monitor IOPS at the OS level during write-heavy benchmarks. If you are near the IOPS limit of your storage class, consider switching to Provisioned IOPS or a larger instance.

Replication lag: Write-heavy load tests expose replication bottlenecks. If your replica consistently lags more than a few seconds, either reduce write load to the primary, increase replica resources, or reconsider whether synchronous replication is required.

Database load testing is essential before any significant scale event. Understanding your database’s capacity limits before you need them - not during an outage - is the difference between controlled scaling and emergency firefighting. Our database performance testing service provides comprehensive benchmarking for your specific database configuration and query workload.

Know Your Scaling Ceiling

Book a free 30-minute capacity scope call with our load testing engineers. We review your architecture, traffic expectations, and upcoming scaling events — and scope the load test that will give you the data you need.

Talk to an Expert