
Multi-Server Architecture Planning for Dedicated Infrastructure


A single dedicated server handles most production web applications well. At some point, it doesn’t — either because traffic has grown beyond what one server can serve, because you need redundancy so a hardware failure doesn’t take the application offline, or because your database has become large enough that it should run on dedicated hardware.

When a Single Server Becomes the Wrong Answer

The trigger points for moving to multi-server architecture are specific. General “we’re growing” reasoning isn’t enough — the costs and complexity of multi-server infrastructure are real, and single-server optimization often extends the runway further than teams expect.

Move to multi-server architecture when:

Load average consistently exceeds your core count during normal traffic hours, not just during spikes. A 16-core server with sustained load average above 20 is queuing work.

Your database and application compete for the same RAM. Redis caching, MySQL InnoDB buffer pool, PHP-FPM workers, and application memory all share the same physical RAM on a single server. At some point, database performance and web tier performance are directly trading off against each other.

A hardware failure would be a business incident. If server downtime during replacement (typically 2-4 hours) would cost you materially, you need redundancy.

Deployment requires downtime. Multi-server setups allow rolling deployments; single-server deployments often require taking the application offline during updates.
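The first trigger above can be checked directly on a Linux server by comparing the 15-minute load average against the core count; a minimal sketch (the wording of the messages is illustrative):

```shell
# Compare the 15-minute load average against the number of cores.
# Sustained load above the core count means runnable work is queuing.
cores=$(nproc)
load15=$(awk '{print $3}' /proc/loadavg)
awk -v l="$load15" -v c="$cores" 'BEGIN { exit !(l > c) }' \
  && echo "load $load15 exceeds $cores cores (CPU saturated)" \
  || echo "load $load15 within $cores cores"
```

Run during normal traffic hours, not during a known spike, since the trigger is about sustained saturation.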

Tier 1: Web + Database Separation

The first meaningful multi-server configuration separates the web application tier from the database tier. This addresses the RAM contention problem and allows each server to be optimized for its role.

Web server: Nginx/Apache, PHP-FPM, application code, Redis cache. CPU-optimized configuration. InMotion’s Essential or Elite tier fits most applications at this stage.

Database server: MySQL/PostgreSQL, large InnoDB buffer pool (70-80% of RAM), optimized disk I/O configuration. Memory-optimized configuration. The Extreme server’s 192GB DDR5 RAM makes an excellent dedicated database server — a 130-150GB InnoDB buffer pool keeps most production databases entirely in-memory.
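The buffer pool sizing above translates directly into the database server’s MySQL configuration; a sketch where the 144G figure is one illustrative choice at roughly 75% of 192GB, within the 130-150GB range discussed:

```ini
# Database server my.cnf sketch — values are illustrative, not prescriptive
[mysqld]
innodb_buffer_pool_size = 144G
# Splitting the pool into instances reduces mutex contention on large pools
# (16 is an assumption, not a value from this article)
innodb_buffer_pool_instances = 16
```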

Network connectivity between the two servers matters. Both servers should be provisioned in the same InMotion data center to ensure low-latency private network communication. Application configuration points database connections to the database server’s private IP rather than localhost:

// WordPress wp-config.php
define('DB_HOST', '10.0.0.2'); // Database server private IP
define('DB_NAME', 'production_db');
define('DB_USER', 'app_user');
define('DB_PASSWORD', 'secure_password');

MySQL on the database server should bind to the private interface and accept connections only from the web server IP:

# /etc/mysql/mysql.conf.d/mysqld.cnf
bind-address = 10.0.0.2

# Grant access only from the web server
# (MySQL 8 removed IDENTIFIED BY from GRANT: create the user first, then grant)
# CREATE USER 'app_user'@'10.0.0.1' IDENTIFIED BY 'password';
# GRANT ALL ON production_db.* TO 'app_user'@'10.0.0.1';
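A host firewall on the database server can enforce the same restriction below the MySQL layer; a sketch assuming ufw and the private IPs used in this section:

```shell
# Allow MySQL (3306) only from the web server's private IP, deny it otherwise
ufw allow from 10.0.0.1 to any port 3306 proto tcp
ufw deny 3306/tcp
```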

Tier 2: Load-Balanced Web Tier

When a single web server is no longer sufficient, adding a second web server behind a load balancer distributes traffic and provides failover if one web server fails.

HAProxy is the standard open source load balancer for this configuration. It runs on a small server (or on the database server if resources permit) and distributes requests across the web tier:

global
    maxconn 50000
    log /dev/log local0

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    option httplog

frontend web_frontend
    bind *:80
    bind *:443 ssl crt /etc/ssl/certs/production.pem
    default_backend web_servers

backend web_servers
    balance roundrobin
    option httpchk GET /health
    server web1 10.0.0.1:80 check inter 2s
    server web2 10.0.0.2:80 check inter 2s

The option httpchk directive sends health check requests to /health on each web server every 2 seconds. A server that fails health checks is removed from rotation automatically. HAProxy’s configuration guide covers the full health check configuration including response code matching and failure thresholds.
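The /health endpoint itself can be as simple as a static response served by Nginx on each web server; a minimal sketch (the location path matches the httpchk check above — this variant reflects Nginx availability only, not PHP-FPM):

```nginx
# Lightweight health endpoint for HAProxy checks; returns 200 without touching PHP
location /health {
    access_log off;
    default_type text/plain;
    return 200 "ok\n";
}
```

A check that should also exercise the application tier could instead route /health to a small PHP script that touches the database or cache.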

Session state must live outside the web servers. When load balancing distributes requests across multiple web servers, each request may hit a different server, and session data stored in PHP’s default file-based session handler won’t be available on the other server. Store sessions in Redis on the database server (this requires the phpredis extension on each web server):

# /etc/php/8.x/fpm/php.ini
session.save_handler = redis
session.save_path = "tcp://10.0.0.3:6379"

All web servers point to the same Redis instance. Any web server can serve any request, regardless of which server handled previous requests from the same user.

Tier 3: Database High Availability

Web tier redundancy without database redundancy leaves a single point of failure at the database layer. MySQL replication or clustering provides database-level redundancy.

MySQL Primary-Replica Replication is the simplest high availability configuration. The primary handles all writes; replicas receive changes via binlog replication and can handle read queries.

# Primary server my.cnf
[mysqld]
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
sync_binlog = 1
innodb_flush_log_at_trx_commit = 1

# Replica server my.cnf
[mysqld]
server-id = 2
relay-log = /var/log/mysql/relay-bin.log
read_only = 1
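Connecting the replica to the primary happens from the MySQL shell on the replica. A sketch using MySQL 8.0.22+ syntax, assuming GTID-based replication (gtid_mode = ON and enforce_gtid_consistency = ON on both servers, in addition to the settings above) and a replication account created on the primary — repl_user and its password are hypothetical names, not defined elsewhere in this article:

```sql
-- On the replica: point at the primary (10.0.0.3) and start replicating
CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = '10.0.0.3',
  SOURCE_USER = 'repl_user',
  SOURCE_PASSWORD = 'repl_password',
  SOURCE_AUTO_POSITION = 1;
START REPLICA;
-- Verify with: SHOW REPLICA STATUS\G
```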

For automatic failover (promoting a replica to primary if the primary fails), Orchestrator is the standard tool for MySQL topology management. Orchestrator monitors replication topology and can execute automatic promotion, with integrations for Consul or ZooKeeper for DNS-based failover coordination.

MySQL InnoDB Cluster provides synchronous replication with automatic failover, at the cost of higher write latency (writes must be acknowledged by a quorum of nodes before committing). For applications where data loss on failover is unacceptable, InnoDB Cluster’s synchronous model provides stronger guarantees than asynchronous replication. MySQL’s Group Replication documentation covers setup and operational considerations.

Architecture Diagram: Three-Server Production Setup

              [Load Balancer / HAProxy]
                    :80 / :443
                   /          \
      [Web Server 1]        [Web Server 2]
        10.0.0.1              10.0.0.2
        Nginx + PHP           Nginx + PHP
                   \          /
                [Database Server]
                    10.0.0.3
              MySQL Primary + Redis
                       |
                  [DB Replica]
                    10.0.0.4
                  MySQL Replica

This configuration handles:

Web tier failure: HAProxy removes the failed web server; the remaining server handles all traffic

Database replica failure: Application continues writing to primary; replica reconnects and catches up

Database primary failure: Orchestrator promotes replica to primary; DNS updates point application to new primary

What it doesn’t handle: load balancer failure. Adding HAProxy redundancy with Keepalived (for VIP failover between two HAProxy instances) addresses the last single point of failure.
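A Keepalived VIP between two HAProxy nodes can be sketched as follows; the interface name, router ID, and the 10.0.0.10 virtual IP are illustrative assumptions, not values from this article:

```conf
# /etc/keepalived/keepalived.conf on the primary HAProxy node
# (the backup node uses state BACKUP and a lower priority)
vrrp_instance VI_1 {
    state MASTER
    interface eth0          # private network interface (assumption)
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.0.0.10           # VIP that DNS points at (assumption)
    }
}
```

If the active HAProxy node fails, the backup node claims the VIP via VRRP and traffic continues without a DNS change.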

Shared File Storage Across Web Servers

Web applications that allow file uploads (images, documents, user-generated content) need those files accessible from all web servers. Files uploaded to web1 need to be readable from web2.

Three approaches, in order of complexity:

NFS mount: One server exports a directory via NFS; others mount it. Simple, but the NFS server becomes a single point of failure and I/O bottleneck at scale.

GlusterFS: A distributed filesystem that replicates data across multiple servers. More complex to configure, but eliminates the single point of failure.

Object storage with CDN front-end: Upload files directly to S3-compatible object storage (or InMotion’s backup storage as a staging area), serve via CDN. The cleanest architecture for new applications — no shared filesystem to manage.

For existing applications, NFS is often the fastest path to multi-server file access. For applications being designed for multi-server from the start, object storage with CDN delivery avoids a class of operational complexity entirely.
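The NFS path can be sketched with the private IPs used in this article; the paths and export options are illustrative:

```shell
# On the NFS server (e.g. the database server, 10.0.0.3): /etc/exports
#   /var/www/uploads 10.0.0.1(rw,sync,no_subtree_check) 10.0.0.2(rw,sync,no_subtree_check)
# then apply with: exportfs -ra

# On each web server: mount the shared uploads directory
mount -t nfs 10.0.0.3:/var/www/uploads /var/www/uploads
# Persist across reboots in /etc/fstab:
#   10.0.0.3:/var/www/uploads /var/www/uploads nfs defaults,_netdev 0 0
```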

Planning the Progression

Multi-server architecture doesn’t need to be implemented all at once. The typical progression:

Start with a well-configured single server (InMotion Essential or Extreme depending on workload)

Separate database to its own server when RAM contention or I/O contention becomes measurable

Add a second web server and load balancer when CPU saturation is consistent

Add database replication when business requirements mandate reduced downtime risk

Add HAProxy redundancy when the load balancer itself becomes the last single point of failure

Each step adds cost and operational complexity. Move to the next tier when current constraints are measurable, not in anticipation of constraints you haven’t hit yet.

Related reading: Hybrid Infrastructure: Combining Dedicated + Cloud | Server Resource Monitoring & Performance Tuning


