“Never trust, always verify” is a useful principle. On bare metal servers, it’s also an implementation challenge that most hosting guides skip over. The zero trust model was developed to address the failure of perimeter-based security — the assumption that anything inside the network boundary is trustworthy. That assumption breaks down in every real infrastructure…
Why Traditional Perimeter Security Fails on Dedicated Infrastructure
A typical dedicated server sits behind a firewall that allows traffic only on specific ports. Once traffic reaches the server, internal services often communicate with each other without additional authentication. MySQL listens on 3306 and accepts connections from the local network. Redis is accessible to any process running on the server. Application code runs with broad filesystem permissions.
This works fine until something inside the perimeter is compromised. A web shell uploaded through a vulnerable WordPress plugin can now reach MySQL directly. A compromised application process can read files belonging to other applications. The perimeter held; the interior didn’t.
Zero trust addresses this by removing the concept of “trusted internal” entirely. Every access request — whether from an external user or an internal service — is authenticated, authorized, and logged.
Identity-Based Access Control for Services
The foundation of zero trust at the service level is ensuring that services authenticate to each other, not just to external users.
Database access: even connections from 127.0.0.1 should present credentials scoped to the minimum necessary permissions. Create application-specific database users rather than connecting as root:
-- Create a user for the application with only required privileges
CREATE USER 'appname'@'127.0.0.1' IDENTIFIED BY 'strong_random_password';
GRANT SELECT, INSERT, UPDATE, DELETE ON appname_db.* TO 'appname'@'127.0.0.1';
FLUSH PRIVILEGES;

-- Verify privileges
SHOW GRANTS FOR 'appname'@'127.0.0.1';
The web application connects as appname and can only access appname_db. Even if this credential is exposed, the blast radius is limited to one database.
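A quick way to confirm the scope is to connect as the new user and attempt to cross the boundary. A sketch assuming the mysql client is available on the same host:

```shell
# Queries against the granted database succeed
mysql -h 127.0.0.1 -u appname -p appname_db -e 'SELECT 1;'

# Attempting to use any other database should fail with an access-denied error
mysql -h 127.0.0.1 -u appname -p -e 'USE mysql;'
```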
Redis access: by default, Redis accepts unauthenticated connections from localhost. Enable authentication in /etc/redis/redis.conf:
requirepass your_strong_redis_password
bind 127.0.0.1
With a strong password and binding to loopback only, Redis connections require both network proximity and the correct credential.
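To verify the setting took effect, a quick check with redis-cli, run on the server itself since Redis is bound to loopback:

```shell
# Unauthenticated commands are rejected once requirepass is set
redis-cli PING        # expect: (error) NOAUTH Authentication required.

# Supplying the password restores access
redis-cli -a your_strong_redis_password PING    # expect: PONG
```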
Network Segmentation with Namespaces and VLANs
For multi-application environments on a single dedicated server, Linux network namespaces provide application-level network isolation without requiring separate hardware:
# Create an isolated network namespace for an application
ip netns add appname_ns
# Create a veth pair (virtual ethernet cable)
ip link add veth0 type veth peer name veth1
# Move one end into the namespace
ip link set veth1 netns appname_ns
# Configure addressing
ip addr add 192.168.100.1/30 dev veth0
ip netns exec appname_ns ip addr add 192.168.100.2/30 dev veth1
# Bring interfaces up
ip link set veth0 up
ip netns exec appname_ns ip link set veth1 up
Processes running within the namespace can only reach the network addresses explicitly configured for them. They cannot directly access databases or services bound to the host network without passing through a controlled gateway.
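A short check of the isolation, assuming the namespace from the commands above (note that outbound access beyond veth0 would additionally require routing or NAT on the host):

```shell
# Inside the namespace, only lo and veth1 are visible
ip netns exec appname_ns ip addr show

# 127.0.0.1 inside the namespace is the namespace's own loopback, so services
# bound to the host's loopback (MySQL, Redis) are not directly reachable here
ip netns exec appname_ns redis-cli -h 127.0.0.1 ping
```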
For simpler multi-tenant isolation, nftables rules can enforce communication policies between applications on the same server:
# Allow MySQL connections only from the application's process user (UID match)
nft add rule inet filter output meta skuid 1001 tcp dport 3306 accept
nft add rule inet filter output tcp dport 3306 drop
This allows only processes running as UID 1001 (the application user) to connect to MySQL — all other processes are blocked at the kernel level.
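These rules assume an inet filter table with a hooked output chain already exists; if not, a minimal setup (run as root) creates both:

```shell
# Create the table and an output chain hooked into the kernel's output path
nft add table inet filter
nft add chain inet filter output '{ type filter hook output priority 0 ; policy accept ; }'

# Rule order matters: the UID-scoped accept must come before the blanket drop
nft list chain inet filter output
```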
Micro-Segmentation for Intra-Server Traffic
AppArmor (Ubuntu/Debian) and SELinux (RHEL/AlmaLinux/Rocky Linux) provide mandatory access control at the kernel level, restricting what files, network resources, and system calls a process can access regardless of Unix permissions.
An AppArmor profile for Nginx that restricts it to only the resources it needs:
/etc/apparmor.d/usr.sbin.nginx:
#include <tunables/global>

/usr/sbin/nginx {
  #include <abstractions/base>
  #include <abstractions/nameservice>

  capability net_bind_service,
  capability setuid,
  capability setgid,

  /var/www/** r,
  /etc/nginx/** r,
  /var/log/nginx/** w,
  /run/nginx.pid rw,

  # Deny everything else
  deny /home/** rwx,
  deny /root/** rwx,
  deny /etc/shadow r,
}
With this profile enforced, even if an attacker achieves code execution within the Nginx process, they cannot read /etc/shadow, access user home directories, or write outside of /var/log/nginx/. The kernel enforces these constraints regardless of what the attacker’s code attempts.
AppArmor documentation covers profile development and enforcement modes. Start in complain mode (logging violations without blocking) to verify your profile before switching to enforce.
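With the apparmor-utils package installed, the mode switch looks like this (profile path as above):

```shell
# Load the profile in complain mode: violations are logged, not blocked
aa-complain /etc/apparmor.d/usr.sbin.nginx

# Exercise the application, then review would-be denials in the kernel log
journalctl -k | grep -i 'apparmor="ALLOWED"'

# Once the log is clean, switch to enforcement and confirm
aa-enforce /etc/apparmor.d/usr.sbin.nginx
aa-status | grep nginx
```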
Zero Trust for Administrative Access
Applying zero trust to SSH access means replacing static credentials with short-lived, identity-verified certificates.
HashiCorp Vault SSH Certificate Authority issues SSH certificates that expire after a configurable duration — 30 minutes, 1 hour, 8 hours. An engineer authenticates to Vault with their identity credentials, receives a short-lived SSH certificate, and uses it to connect to the server. If the certificate is stolen, it expires shortly. If the engineer leaves the organization, revoking their Vault access immediately ends their ability to obtain new certificates.
Vault’s SSH secrets engine documentation covers setup for both server-side verification and client certificate issuance.
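The underlying mechanism is ordinary OpenSSH certificate signing; the sketch below reproduces it with plain ssh-keygen so the expiry window is visible without a Vault deployment. The paths, the principal name, and the 30-minute lifetime here are arbitrary illustration choices.

```shell
set -e
mkdir -p /tmp/sshca-demo && cd /tmp/sshca-demo
rm -f ca_key* engineer_key*

# 1. Create a CA key pair (Vault's SSH secrets engine holds the CA key for you)
ssh-keygen -q -t ed25519 -f ca_key -N '' -C 'demo-ssh-ca'

# 2. Create the engineer's key pair
ssh-keygen -q -t ed25519 -f engineer_key -N '' -C 'engineer'

# 3. Sign the engineer's public key; the certificate is valid for 30 minutes
ssh-keygen -q -s ca_key -I engineer@example.com -n deploy -V +30m engineer_key.pub

# 4. Inspect the certificate and note the Valid: window and principal list
ssh-keygen -L -f engineer_key-cert.pub
```

The server side trusts the CA via a single `TrustedUserCAKeys` line in sshd_config, which is exactly what Vault's server-side verification setup configures.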
For teams not ready to deploy Vault, a simpler zero trust improvement for SSH is IP allowlisting combined with certificate rotation:
# In /etc/ssh/sshd_config
PasswordAuthentication no
PubkeyAuthentication yes

# Only allow logins that originate from the corporate VPN or jump host range
AllowUsers *@10.0.0.0/8
Logging and Continuous Verification
Zero trust without logging is just hope. Every access decision needs an audit trail. For a dedicated server:
SSH access logging: Confirm sshd logs to /var/log/auth.log (Debian) or /var/log/secure (RHEL). Every login attempt, successful or failed, with source IP and username.
Application-level audit logging: Ensure your application logs authenticated user actions, not just requests. Log the identity of who performed each operation, not just that the operation occurred.
Centralized log shipping: log data stored only on a compromised server can be deleted by the attacker. Ship logs to a remote syslog receiver or cloud logging service to which the server can append but not delete.
Periodic access review: Monthly review of all active SSH keys in /root/.ssh/authorized_keys and each user’s ~/.ssh/authorized_keys. Remove keys belonging to former employees, former contractors, or systems that no longer need access.
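For the log-shipping step, a minimal rsyslog forwarding fragment; the receiver hostname is a placeholder, and `@@` selects TCP (a single `@` would be UDP):

```
# /etc/rsyslog.d/90-remote.conf
*.* @@logs.example.com:514
```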
Zero Trust Is a Continuous Process, Not a Deployment
The organizations with the strongest security posture on dedicated infrastructure didn’t deploy zero trust in a weekend. They started with the highest-risk access paths — SSH, database connections — and added identity verification and logging there first. Then they moved inward, hardening service-to-service communication and process-level access controls.
InMotion’s Premier Care managed service includes the foundational security configuration appropriate for a production dedicated server. Teams operating under strict compliance requirements or threat models — financial services, healthcare, regulated data — typically layer additional zero trust controls on top of that baseline.
Related reading: Server Hardening Best Practices | DDoS Protection Strategies for Dedicated Infrastructure
