
Database Server Hosting - MySQL, PostgreSQL, and MongoDB

Database performance problems almost always trace back to one of three causes: insufficient memory forcing buffer pool reads from disk, storage I/O that cannot sustain transaction throughput, or CPU contention from too many concurrent queries sharing a resource pool. Dedicated bare metal servers eliminate all three from the hosting side of the equation.

This article covers specific configuration parameters for MySQL, PostgreSQL, and MongoDB on dedicated hardware, with reference values for InMotion Hosting’s server tiers. These are starting points, not universal settings. Your workload characteristics will require adjustment, but having verified starting values is faster than tuning from defaults.

Why Database Workloads Belong on Dedicated Hardware

Databases are uniquely sensitive to the resource contention that shared hosting creates. When MySQL’s InnoDB buffer pool reaches its size limit and starts evicting pages, every subsequent query that needs an evicted page requires a disk read. On a shared environment, another tenant’s traffic can push your buffer pool occupancy down at the worst possible moment.

On a dedicated server, the buffer pool holds what you configured it to hold. If you allocate 100GB to InnoDB, you have 100GB. Full stop. The predictability this creates is not a minor convenience. It is the difference between a database that performs consistently under load and one that behaves unpredictably.

This surprises many database administrators, who have come to treat performance variability as an inherent database characteristic. Much of it is actually a hosting artifact.

MySQL / MariaDB Configuration

InnoDB Buffer Pool Sizing

The single most important MySQL configuration decision is InnoDB buffer pool size. The target is to keep your entire working dataset in memory. On a 64GB server (InMotion Essential or Advanced), allocate 40-48GB to the buffer pool. On a 192GB system, you can reasonably allocate 140-160GB:

innodb_buffer_pool_size = 140G (on 192GB Extreme server)

innodb_buffer_pool_instances = 8 (reduce mutex contention on multi-core systems)

innodb_log_file_size = 2G (larger redo logs reduce checkpoint frequency)

innodb_flush_log_at_trx_commit = 1 (full ACID compliance; adjust to 2 for write-heavy non-critical workloads)

innodb_io_capacity = 2000 (increase to 4000+ for NVMe drives to allow full I/O utilization)
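Collected into configuration-file form, the settings above look like this (an illustrative my.cnf fragment for a 192GB Extreme server; adjust the buffer pool figure for smaller tiers):

```ini
# Illustrative my.cnf fragment for a 192GB dedicated database server
[mysqld]
innodb_buffer_pool_size        = 140G
innodb_buffer_pool_instances   = 8
innodb_log_file_size           = 2G
# Set to 2 only for write-heavy, non-critical workloads
innodb_flush_log_at_trx_commit = 1
# Use ~2000 for SATA SSD, 4000+ for NVMe
innodb_io_capacity             = 4000
```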

NVMe-Specific Tuning for MySQL

MySQL defaults were written assuming spinning disk or SATA SSD. On NVMe, several parameters need adjustment to avoid artificially throttling I/O throughput:

innodb_io_capacity_max = 8000 (allows burst I/O utilization on NVMe)

innodb_read_io_threads = 8 (increase from default 4 to utilize NVMe parallelism)

innodb_write_io_threads = 8 (same reason)

innodb_flush_method = O_DIRECT (bypasses OS page cache to prevent double-buffering with InnoDB buffer pool)

With O_DIRECT enabled on NVMe, MySQL bypasses the OS page cache and manages its own buffer pool entirely. This prevents the buffer pool and the OS from independently caching the same data, which on a 192GB system would waste substantial memory if both layers tried to cache the dataset.
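In my.cnf form, the NVMe adjustments above are (illustrative fragment; thread counts assume a modern multi-core CPU):

```ini
# Illustrative my.cnf fragment: NVMe I/O settings
[mysqld]
innodb_io_capacity_max  = 8000
innodb_read_io_threads  = 8
innodb_write_io_threads = 8
innodb_flush_method     = O_DIRECT
```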

Slow Query Logging

Enable slow query logging at the 100ms threshold as a permanent monitoring tool, not just during troubleshooting:

slow_query_log = ON

long_query_time = 0.1 (100ms threshold)

log_queries_not_using_indexes = ON (catches full table scans even if they complete quickly)
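For ongoing analysis, pt-query-digest is the usual tool. To show what the log format contains, here is a minimal Python sketch (not an official parser) that totals the Query_time values in a slow-log excerpt:

```python
import re

def total_slow_time(log_text: str) -> float:
    """Sum the Query_time values (in seconds) found in MySQL slow-log text."""
    return sum(float(t) for t in re.findall(r"# Query_time: ([\d.]+)", log_text))

# Two entries in standard slow-log format
sample = """\
# Query_time: 0.412  Lock_time: 0.000  Rows_sent: 10
SELECT * FROM orders WHERE customer_id = 42;
# Query_time: 1.250  Lock_time: 0.001  Rows_sent: 5000
SELECT * FROM order_items;
"""
print(round(total_slow_time(sample), 3))  # 1.662
```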

PostgreSQL Configuration

Memory Settings

PostgreSQL memory configuration is less aggressive than MySQL by default because it is designed to run alongside other processes. On a dedicated database server, you can push much higher:

shared_buffers = 48GB (25% of RAM on 192GB system; PostgreSQL docs recommend 25% as a starting point, though some workloads benefit from higher values)

effective_cache_size = 144GB (tells the query planner how much memory is available for caching; set to 75% of RAM)

work_mem = 256MB (per sort/hash operation; multiply by max_connections for total potential usage; conservative starting value)

maintenance_work_mem = 4GB (used for VACUUM, CREATE INDEX, and similar operations)

max_wal_size = 8GB (reduces checkpoint frequency for write-heavy workloads)

shared_buffers at 25% is a PostgreSQL convention, not a ceiling. Workloads with large frequently-accessed datasets benefit from values up to 40% of RAM, with effective_cache_size raised proportionally. The query planner uses effective_cache_size to decide between index scans and sequential scans, so an inaccurate value leads to suboptimal query plans.
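The 25%/75% rules of thumb reduce to simple arithmetic. A sketch of that sizing logic (the helper is illustrative, not part of any PostgreSQL tooling):

```python
def pg_memory_starting_points(ram_gb: int) -> dict:
    """Derive starting shared_buffers and effective_cache_size values
    from total RAM, per the 25% / 75% guidelines described above."""
    return {
        "shared_buffers": f"{ram_gb // 4}GB",            # 25% of RAM
        "effective_cache_size": f"{ram_gb * 3 // 4}GB",  # 75% of RAM
    }

print(pg_memory_starting_points(192))
# {'shared_buffers': '48GB', 'effective_cache_size': '144GB'}
```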

Connection Pooling with PgBouncer

PostgreSQL spawns one process per connection. At 200+ concurrent connections, process context switching overhead becomes measurable. PgBouncer acts as a connection proxy that maintains a smaller pool of actual PostgreSQL connections, multiplexing hundreds of application connections through them.

For applications with hundreds of concurrent users, install PgBouncer on the same server and configure applications to connect to PgBouncer on port 6432. Transaction-mode pooling is appropriate for most web application workloads; session-mode pooling is required for applications that use temporary tables or advisory locks.
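A minimal pgbouncer.ini for that setup might look like the following (database name, auth file path, and pool sizes are placeholders to adapt):

```ini
; Illustrative /etc/pgbouncer/pgbouncer.ini
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr       = 127.0.0.1
listen_port       = 6432
auth_type         = md5
auth_file         = /etc/pgbouncer/userlist.txt
; use 'session' instead for temporary tables or advisory locks
pool_mode         = transaction
max_client_conn   = 1000
default_pool_size = 40
```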

NVMe and WAL Performance

PostgreSQL write performance is heavily influenced by WAL (Write-Ahead Log) write throughput. Every committed transaction writes a WAL record before returning to the client. On NVMe, WAL fsync operations complete in microseconds vs. milliseconds on SATA SSDs. This directly improves transaction throughput for write-heavy workloads.

Configure wal_compression = on to reduce WAL volume for read-heavy workloads with occasional large writes. For analytics replicas receiving streaming replication, NVMe on both primary and replica ensures replication lag stays minimal even during heavy write periods.

MongoDB Configuration

WiredTiger Cache Sizing

MongoDB’s WiredTiger storage engine uses an internal cache separate from the OS page cache. The default sets the WiredTiger cache to 50% of (RAM minus 1GB). On a 192GB system, that is approximately 95GB. For dedicated database servers, you can increase this:

storage.wiredTiger.engineConfig.cacheSizeGB: 120 (in mongod.conf for a 192GB dedicated server)

WiredTiger compresses data blocks on disk (Snappy by default); data held in the internal cache stays uncompressed. On NVMe storage with CPU headroom to spare, zstd compression provides better ratios than Snappy with acceptable CPU overhead, reducing the effective I/O load for large document collections.
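In mongod.conf form (YAML), the cache size and compressor settings above would look like this; note the block compressor applies only to newly created collections:

```yaml
# Illustrative mongod.conf fragment for a 192GB dedicated server
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 120
    collectionConfig:
      blockCompressor: zstd   # default is snappy; zstd trades CPU for ratio
```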

Read/Write Concern and Journal Configuration

For replica set deployments on a single dedicated server running multiple mongod instances, configure write concern appropriately:

w: majority for financial or critical data (waits for majority of replica set to acknowledge)

j: true enables journaling, which writes to NVMe before acknowledging; acceptable latency cost on NVMe

readPreference: secondaryPreferred for read-heavy workloads distributes read load across replica members
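These options can also be supplied per application via the connection string; an illustrative URI (host names and replica set name are placeholders):

```
mongodb://db1.example.com:27017,db2.example.com:27017/app?replicaSet=rs0&w=majority&journal=true&readPreference=secondaryPreferred
```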

RAID Strategy for Database Workloads

InMotion dedicated servers use software RAID configured with mdadm. This matters for understanding the performance characteristics, because software RAID on NVMe with a modern multi-core CPU behaves differently than traditional hardware RAID controllers with battery-backed write cache.

| RAID Level | Usable Capacity | Read Performance | Write Performance | Use Case |
| --- | --- | --- | --- | --- |
| RAID 0 (stripe) | 7.68TB (full) | 2x sequential | 2x sequential | Scratch space, non-critical data |
| RAID 1 (mirror, InMotion default) | 3.84TB | Read from either drive | Write to both drives | Production databases |
| No RAID (single drive) | 3.84TB | Full NVMe speed | Full NVMe speed | Read replicas with external backup |

For production database servers, RAID 1 via mdadm is the right default. The write penalty is minimal on NVMe (both drives are fast enough that mirrored writes stay ahead of most application throughput requirements), and the redundancy protects against a single drive failure without data loss during the replacement window.

RAID is not a backup strategy. A software bug that corrupts the data directory, an accidental DROP TABLE, or ransomware affects both mirrored drives simultaneously. Premier Care’s automated 500GB backup storage provides the actual protection against those failure modes.

Backup Strategy for Production Databases

MySQL Backup

For MySQL databases under 50GB, nightly mysqldump with --single-transaction produces consistent backups without locking InnoDB tables. For larger databases, Percona XtraBackup performs hot physical backups that restore faster than SQL dumps. Store backups to Premier Care’s 500GB backup volume, which sits off-server.
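Scheduled as a cron entry, a nightly dump might look like this (paths are placeholders; assumes credentials are supplied via ~/.my.cnf, and the % in the date format is escaped because cron treats % specially):

```
# Nightly consistent dump at 02:00, compressed, written to the backup volume
0 2 * * * mysqldump --single-transaction --routines --all-databases | gzip > /backup/mysql/nightly-$(date +\%F).sql.gz
```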

PostgreSQL Backup

pg_dump for smaller databases; pg_basebackup for physical base backups of larger instances. For near-zero RPO requirements, configure continuous WAL archiving to the backup volume: every completed WAL segment ships automatically, giving point-in-time recovery capability with typically 5-10 minute granularity.
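Continuous WAL archiving is enabled in postgresql.conf along these lines (the archive path is a placeholder; verify a restore before relying on it):

```ini
# Illustrative postgresql.conf fragment: continuous WAL archiving
archive_mode    = on
archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'
archive_timeout = 300   # force a segment switch at least every 5 minutes
```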

MongoDB Backup

mongodump provides logical backups; for larger deployments, filesystem-level snapshots of the WiredTiger data directory (taken when the database is idle or at a consistent point) are faster to restore. For replica set deployments, taking backups from a secondary member avoids any impact on primary write throughput.

Choosing the Right Dedicated Server Tier

| Database Size | Concurrent Connections | Recommended Tier | Monthly Cost |
| --- | --- | --- | --- |
| Under 20GB working set | Up to 100 | Essential (64GB DDR4) | $99.99/mo |
| 20-50GB working set | 100-300 | Advanced (64GB DDR4, RAID-1) | $149.99/mo |
| 50-140GB working set | 300-500 | Elite | $199.99/mo |
| 140GB+ working set | 500+ | Extreme (192GB DDR5 ECC) | $349.99/mo |

These thresholds assume the server is dedicated to the database workload. Mixed servers hosting the application layer alongside the database need larger memory headroom across all tiers.
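The table reduces to a simple lookup. A sketch of that decision logic (thresholds and tier names are taken from the table; the function itself is illustrative):

```python
def recommend_tier(working_set_gb: float, connections: int) -> str:
    """Map working-set size and concurrency to a server tier,
    following the thresholds in the table above (dedicated DB workload)."""
    if working_set_gb >= 140 or connections > 500:
        return "Extreme (192GB DDR5 ECC)"
    if working_set_gb >= 50 or connections > 300:
        return "Elite"
    if working_set_gb >= 20 or connections > 100:
        return "Advanced (64GB DDR4, RAID-1)"
    return "Essential (64GB DDR4)"

print(recommend_tier(35, 200))   # Advanced (64GB DDR4, RAID-1)
print(recommend_tier(160, 800))  # Extreme (192GB DDR5 ECC)
```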

Getting Started

Dedicated server pricing: inmotionhosting.com/dedicated-servers/dedicated-server-price

NVMe dedicated servers: inmotionhosting.com/dedicated-servers/nvme

Premier Care for automated backups: inmotionhosting.com/blog/inmotion-premier-care/

InMotion Hosting’s APS team handles OS-level management and can assist with initial configuration under Premier Care. The 1-hour monthly InMotion Solutions consultation is worth using for database tuning review, particularly when migrating a production database from shared hosting where the performance improvement is typically substantial.


