Nginx Performance Tuning: Optimize Your Web Server for Maximum Speed

A default Nginx installation works fine for small sites, but as traffic grows, you need to tune it. The right configuration can handle tens of thousands of concurrent connections on modest hardware. This guide covers the most impactful Nginx performance optimizations — from worker processes to gzip compression, caching, and security headers.

Worker Processes and Connections

The most important tuning starts with how Nginx handles processes and connections.

Worker Processes

# /etc/nginx/nginx.conf

# Set to the number of CPU cores
worker_processes auto;

# Pin workers to CPU cores (optional, for dedicated servers)
worker_cpu_affinity auto;

The auto value detects the number of CPU cores: on a 4-core server, Nginx creates 4 worker processes, each handling connections independently.

Check your CPU cores:

nproc
# or
grep -c ^processor /proc/cpuinfo

Worker Connections

events {
    worker_connections 4096;
    multi_accept on;
    use epoll;
}
  • worker_connections: Maximum simultaneous connections per worker. Total capacity = worker_processes × worker_connections
  • multi_accept on: Accept multiple connections at once instead of one at a time
  • use epoll: Use the efficient epoll event model on Linux

With 4 workers and 4096 connections each, Nginx can handle 16,384 concurrent connections.
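A quick sanity check of that arithmetic, using the values from the example config:

```shell
# Total connection capacity = worker_processes * worker_connections
workers=4
connections=4096
echo "capacity: $((workers * connections))"   # prints 16384
```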

File Descriptor Limits

Each connection uses a file descriptor. Increase the OS limit:

# Check current limit
ulimit -n

# Set in /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535

In Nginx:

worker_rlimit_nofile 65535;
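One caveat: on systemd-based distributions, Nginx started as a service does not read /etc/security/limits.conf, so the limit must also be raised in a unit override (a sketch using the standard systemd override mechanism):

```ini
# /etc/systemd/system/nginx.service.d/override.conf
# Create with: sudo systemctl edit nginx
[Service]
LimitNOFILE=65535
```

Then apply it with sudo systemctl daemon-reload && sudo systemctl restart nginx.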

HTTP Optimizations

Keepalive Connections

Keepalive reuses TCP connections instead of opening new ones for each request:

http {
    keepalive_timeout 65;
    keepalive_requests 1000;
}

For upstream (proxy) connections:

upstream backend {
    server 127.0.0.1:3000;
    keepalive 32;  # Keep 32 connections alive per worker
}

server {
    location /api/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";  # Required for upstream keepalive
    }
}

Sendfile and TCP Optimizations

http {
    sendfile on;        # Use kernel's sendfile() for static files
    tcp_nopush on;      # Send headers and beginning of file in one packet
    tcp_nodelay on;     # Don't buffer small packets (for keepalive)
}
  • sendfile: Bypasses user-space buffering for static files — significant speedup
  • tcp_nopush: Optimizes packet size when sending files
  • tcp_nodelay: Reduces latency for keepalive connections

Buffer Sizes

Tune buffers for your typical response sizes:

http {
    client_body_buffer_size 16k;
    client_header_buffer_size 1k;
    client_max_body_size 50m;
    large_client_header_buffers 4 8k;

    # Proxy buffers
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
}
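Buffering should usually stay on, but for streaming responses (Server-Sent Events, long polling) it delays delivery until a buffer fills. A sketch of the common per-location override, assuming a hypothetical /events/ endpoint and the backend upstream from earlier:

```nginx
location /events/ {
    proxy_pass http://backend;
    proxy_buffering off;   # stream the response to the client as it arrives
    proxy_cache off;       # never cache event streams
}
```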

Gzip Compression

Compression reduces bandwidth by 60-80% for text-based content:

http {
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_types
        text/plain
        text/css
        text/javascript
        application/javascript
        application/json
        application/xml
        application/rss+xml
        image/svg+xml;
    # Note: WOFF2 fonts are already compressed internally; gzipping them wastes CPU
}
  • gzip_comp_level 4: Balance between compression ratio and CPU usage (1-9, 4-6 is optimal)
  • gzip_min_length 256: Don’t compress tiny responses
  • gzip_vary on: Tell proxies to cache compressed and uncompressed versions separately
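The ratio-versus-CPU trade-off is easy to see offline with the gzip command-line tool, which uses the same compression levels (illustrative; exact sizes depend on the input):

```shell
# Generate ~10 KB of repetitive text, then compare gzip levels 1, 4, and 9
seq 1 2000 > /tmp/sample.txt
orig=$(wc -c < /tmp/sample.txt)
for level in 1 4 9; do
    comp=$(gzip -"$level" -c /tmp/sample.txt | wc -c)
    echo "level $level: $orig -> $comp bytes"
done
```

Higher levels shrink the output a little more but cost noticeably more CPU per response, which is why 4-6 is the usual sweet spot.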

Brotli Compression (Better Than Gzip)

If you have the Brotli module installed:

brotli on;
brotli_comp_level 4;
brotli_types text/plain text/css application/javascript application/json image/svg+xml;

Brotli provides ~20% better compression than gzip for web content.

Static File Caching

Browser Caching with Cache-Control

# Cache static assets for 1 year
location ~* \.(jpg|jpeg|png|gif|ico|svg|webp|woff|woff2|ttf|css|js)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";  # immutable assumes versioned/fingerprinted filenames
    access_log off;
}

# Don't cache HTML
location ~* \.html$ {
    expires -1;
    add_header Cache-Control "no-store, no-cache, must-revalidate";
}

Open File Cache

Cache file metadata (file descriptors, sizes, modification times):

http {
    open_file_cache max=10000 inactive=60s;
    open_file_cache_valid 120s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
}

This avoids repeated filesystem lookups for frequently accessed files.

Proxy Caching

Cache upstream responses to reduce backend load:

http {
    # Define cache zone
    proxy_cache_path /var/cache/nginx 
        levels=1:2 
        keys_zone=my_cache:10m 
        max_size=10g 
        inactive=60m 
        use_temp_path=off;

    server {
        location /api/ {
            proxy_pass http://backend;
            proxy_cache my_cache;
            proxy_cache_valid 200 10m;
            proxy_cache_valid 404 1m;
            proxy_cache_use_stale error timeout updating;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}
  • keys_zone=my_cache:10m: 10MB of shared memory for cache keys
  • max_size=10g: Maximum disk space for cached content
  • proxy_cache_use_stale: Serve stale content if backend is down

Check cache status with the X-Cache-Status header: HIT, MISS, EXPIRED, BYPASS.
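Some responses should never come from the cache at all. A sketch of a common bypass pattern for authenticated traffic, assuming sessions are tracked in a cookie (the "session" cookie name is an assumption):

```nginx
location /api/ {
    proxy_pass http://backend;
    proxy_cache my_cache;
    # Skip the cache for logged-in users (hypothetical "session" cookie)
    proxy_cache_bypass $cookie_session;   # fetch from the backend when the cookie is set
    proxy_no_cache $cookie_session;       # and don't store the response either
}
```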

Rate Limiting

Protect against abuse and DDoS:

http {
    # Define rate limit zones
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
    limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;

    server {
        # API: 10 requests/second with burst of 20
        location /api/ {
            limit_req zone=api burst=20 nodelay;
            proxy_pass http://backend;
        }

        # Login: 1 request/second with burst of 5
        location /auth/login {
            limit_req zone=login burst=5;
            proxy_pass http://backend;
        }
    }
}
  • rate=10r/s: 10 requests per second per IP
  • burst=20: Allow bursts up to 20 requests
  • nodelay: Process burst requests immediately (don’t queue)
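By default Nginx answers rejected requests with 503, which clients can mistake for a backend outage; 429 Too Many Requests is more accurate. A small addition (these directives have been available since Nginx 1.3.15):

```nginx
http {
    limit_req_status 429;        # status for rate-limit rejections
    limit_conn_status 429;       # status for connection-limit rejections
    limit_req_log_level warn;    # log rejections at warn instead of error
}
```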

Connection Limiting

http {
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    server {
        limit_conn perip 100;  # Max 100 connections per IP
    }
}

Security Headers

Add security headers to every response:

server {
    # Prevent clickjacking
    add_header X-Frame-Options "SAMEORIGIN" always;

    # Prevent MIME type sniffing
    add_header X-Content-Type-Options "nosniff" always;

    # XSS protection
    add_header X-XSS-Protection "1; mode=block" always;

    # Referrer policy
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # HSTS (force HTTPS for 1 year)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Content Security Policy
    add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'" always;

    # Permissions policy
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
}
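One well-known pitfall with this setup: add_header directives are inherited from the enclosing block only if the current level defines none of its own. A sketch of the surprise:

```nginx
server {
    add_header X-Frame-Options "SAMEORIGIN" always;

    location /downloads/ {
        # Defining ANY add_header here discards all inherited headers,
        # so X-Frame-Options is no longer sent for /downloads/ responses.
        # Repeat the security headers here (or via an include file) if needed.
        add_header Content-Disposition "attachment";
    }
}
```

A common workaround is to put the security headers in a shared snippet and include it in every location that adds its own headers.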

SSL/TLS Optimization

server {
    listen 443 ssl http2;  # on Nginx 1.25.1+, prefer: listen 443 ssl; plus http2 on;

    # Use modern TLS only
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;

    # SSL session caching
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;  # requires ssl_trusted_certificate with the issuer CA chain
    resolver 1.1.1.1 8.8.8.8 valid=300s;
}

HTTP/2

HTTP/2 is enabled with listen 443 ssl http2 (or, since Nginx 1.25.1, a separate http2 on; directive). It provides:

  • Multiplexed requests over a single connection
  • Header compression
  • Server push (deprecated in practice and removed in Nginx 1.25.1)

Logging Optimization

Reduce log I/O for high-traffic sites:

http {
    # Buffer log writes
    access_log /var/log/nginx/access.log combined buffer=64k flush=5m;

    # Conditional logging: skip successful (2xx/3xx) responses
    map $status $loggable {
        ~^[23] 0;
        default 1;
    }

    server {
        access_log /var/log/nginx/access.log combined if=$loggable;

        # Disable access logging for static assets
        # (location blocks are only valid inside a server block, not directly in http)
        location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2)$ {
            access_log off;
        }
    }
}

Testing Configuration

# Test syntax
sudo nginx -t

# Reload without downtime
sudo nginx -s reload

# Check current connections
curl http://localhost/nginx_status
# (requires stub_status module)

Enable Status Page

server {
    location /nginx_status {
        stub_status on;
        allow 127.0.0.1;
        deny all;
    }
}
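The status page returns a small plain-text report (the numbers here are illustrative):

```text
Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
```

accepts and handled diverge when connections were dropped at a resource limit; Waiting counts idle keepalive connections.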

Benchmarking

# Test with wrk
wrk -t12 -c400 -d30s http://localhost/

# Test with ab (Apache Bench)
ab -n 10000 -c 100 http://localhost/

# Test with hey
hey -n 10000 -c 200 http://localhost/

Complete Optimized Configuration

worker_processes auto;
worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
    multi_accept on;
    use epoll;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Basic
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    keepalive_requests 1000;
    types_hash_max_size 2048;

    # Gzip
    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied any;
    gzip_types text/plain text/css application/javascript application/json image/svg+xml;

    # File cache
    open_file_cache max=10000 inactive=60s;
    open_file_cache_valid 120s;
    open_file_cache_min_uses 2;

    # Logging
    access_log /var/log/nginx/access.log combined buffer=64k;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Summary

Most Nginx performance comes from a few key optimizations: proper worker configuration, gzip compression, static file caching, and keepalive connections. Start with these, measure with benchmarking tools, and tune further as needed.

Test with nginx -t, reload with nginx -s reload, and benchmark to verify improvements.