How to Install and Configure Nginx on Linux

A step-by-step guide to installing and configuring Nginx on Linux: install via apt or yum, enable and start the service, create server blocks, set the document root, adjust the firewall, and test and reload the configuration.

In today's digital landscape, web server performance directly impacts user experience, search engine rankings, and ultimately, business success. Whether you're launching a personal blog, deploying a complex web application, or managing enterprise-level infrastructure, the web server you choose becomes the foundation of your online presence. Poor server configuration leads to slow page loads, security vulnerabilities, and frustrated visitors who won't return. Understanding how to properly set up and optimize your web server isn't just a technical skill—it's a business necessity that separates professional deployments from amateur attempts.

Nginx (pronounced "engine-x") represents one of the most powerful, lightweight, and flexible web servers available today. Originally created to solve the C10K problem—handling ten thousand concurrent connections—this open-source software has evolved into a comprehensive solution for serving static content, reverse proxying, load balancing, and HTTP caching. This guide approaches Nginx installation and configuration from multiple perspectives: the system administrator seeking reliability, the developer optimizing for performance, and the security professional hardening infrastructure against threats.

Throughout this comprehensive walkthrough, you'll gain practical knowledge for installing Nginx across different Linux distributions, understanding its architectural philosophy, configuring server blocks for multiple websites, implementing SSL/TLS encryption, optimizing performance parameters, and troubleshooting common issues. You'll discover not just the "how" but the "why" behind each configuration decision, empowering you to adapt these principles to your specific requirements. By the end, you'll possess the confidence to deploy production-ready Nginx servers that deliver content efficiently, securely, and reliably.

Understanding Nginx Architecture and Use Cases

Before diving into installation procedures, grasping Nginx's fundamental architecture helps you make informed configuration decisions. Unlike traditional web servers that create a new process or thread for each connection, Nginx employs an asynchronous, event-driven architecture. This design allows a single worker process to handle thousands of concurrent connections with minimal memory overhead. The master process reads configuration files and manages worker processes, while worker processes handle actual client requests. This separation provides graceful configuration reloads without dropping connections—a critical feature for high-availability environments.

Organizations deploy Nginx in various capacities beyond simple web serving. As a reverse proxy, it sits between clients and backend application servers, distributing requests, caching responses, and providing an additional security layer. As a load balancer, it distributes traffic across multiple backend servers using algorithms like round-robin, least connections, or IP hash. As an HTTP cache, it stores frequently requested content in memory or on disk, dramatically reducing backend server load. Understanding these roles helps you architect solutions that leverage Nginx's strengths appropriately.

"The event-driven model fundamentally changes how we think about web server scalability. Traditional approaches hit walls; asynchronous architectures break through them."

The choice between Nginx and alternatives like Apache depends on specific requirements. Nginx excels at serving static content, handling concurrent connections efficiently, and functioning as a reverse proxy. Apache offers more mature module ecosystems and per-directory configuration through .htaccess files. Many modern infrastructures use both: Nginx as a frontend proxy handling static content and SSL termination, with Apache serving dynamic content behind it. This hybrid approach combines the strengths of both systems while mitigating their respective weaknesses.

Preparing Your Linux Environment

Successful Nginx deployment begins with proper system preparation. First, ensure your Linux distribution is updated with the latest security patches. Different distributions require different approaches, but the principle remains consistent: start with a clean, updated foundation. For Ubuntu and Debian systems, run sudo apt update && sudo apt upgrade. For CentOS, RHEL, and Fedora, use sudo yum update or sudo dnf update depending on your version. This step prevents compatibility issues and ensures you're not building on a vulnerable base system.
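
For quick reference, the update commands for each distribution family:

# Ubuntu / Debian
sudo apt update && sudo apt upgrade

# CentOS 7
sudo yum update

# CentOS 8+, RHEL 8+, Fedora
sudo dnf update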

Consider your installation source carefully. Most Linux distributions include Nginx in their official repositories, providing easy installation but often featuring older versions. The official Nginx repository offers the latest stable releases with recent features and security patches. For production environments, using official Nginx repositories typically provides the best balance between stability and currency. Development environments might benefit from compiling from source to enable specific modules or optimizations, though this approach requires more maintenance effort.

System resource planning impacts long-term performance. Nginx itself consumes minimal resources—a properly configured instance can run comfortably on systems with 512MB RAM. However, your specific use case determines actual requirements. A simple static site server needs far less than a reverse proxy handling SSL termination for multiple backend applications. Before installation, document your expected traffic patterns, concurrent connection requirements, and whether you'll implement caching. These factors influence configuration decisions you'll make after installation completes.

Distribution-Specific Prerequisites

Ubuntu and Debian systems require the curl, gnupg2, and lsb-release packages for adding official repositories. Install these with sudo apt install curl gnupg2 ca-certificates lsb-release. CentOS and RHEL systems need the yum-utils package for repository management. Fedora users should ensure dnf-plugins-core is installed. These utilities enable you to add external repositories securely, verifying package signatures to prevent tampering.
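
The same prerequisites as commands:

# Ubuntu / Debian
sudo apt install curl gnupg2 ca-certificates lsb-release

# CentOS / RHEL
sudo yum install yum-utils

# Fedora
sudo dnf install dnf-plugins-core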

Firewall configuration deserves attention before installation. Most Linux distributions now include firewalld or ufw (Uncomplicated Firewall) by default. You'll need to allow HTTP (port 80) and HTTPS (port 443) traffic. For ufw on Ubuntu: sudo ufw allow 'Nginx Full'. For firewalld on CentOS: sudo firewall-cmd --permanent --add-service=http && sudo firewall-cmd --permanent --add-service=https && sudo firewall-cmd --reload. Configuring firewall rules before installation prevents the frustrating scenario where Nginx runs perfectly but remains inaccessible due to blocked ports.
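
Gathered together, the firewall rules look like this:

# Ubuntu (ufw)
sudo ufw allow 'Nginx Full'

# CentOS / RHEL (firewalld)
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload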

Distribution       | Package Manager | Update Command                      | Firewall Tool
-------------------|-----------------|-------------------------------------|--------------
Ubuntu 20.04/22.04 | apt             | sudo apt update && sudo apt upgrade | ufw
Debian 10/11       | apt             | sudo apt update && sudo apt upgrade | ufw/iptables
CentOS 7/8         | yum/dnf         | sudo yum update / sudo dnf update   | firewalld
RHEL 8/9           | dnf             | sudo dnf update                     | firewalld
Fedora 36+         | dnf             | sudo dnf update                     | firewalld

Installing Nginx from Official Repositories

The official Nginx repositories provide the most reliable installation path for production systems. This approach ensures you receive timely security updates while maintaining compatibility with your Linux distribution. The process varies slightly across distributions but follows a consistent pattern: add the repository, update package lists, and install Nginx. Let's walk through each major distribution family with detailed commands and explanations.

Installation on Ubuntu and Debian

For Ubuntu and Debian systems, begin by importing the Nginx signing key to verify package authenticity. Execute curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null. This command downloads the official signing key, converts it to the proper format, and stores it where apt can reference it. Next, add the repository to your sources list. For Ubuntu, create a new file: echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" | sudo tee /etc/apt/sources.list.d/nginx.list. The backticks around lsb_release -cs automatically insert your Ubuntu codename (focal, jammy, etc.), ensuring you get packages built for your specific version.
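
Both steps together:

# Import the Nginx signing key
curl https://nginx.org/keys/nginx_signing.key | gpg --dearmor | sudo tee /usr/share/keyrings/nginx-archive-keyring.gpg >/dev/null

# Add the repository (shown for Ubuntu; Debian uses http://nginx.org/packages/debian)
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/ubuntu `lsb_release -cs` nginx" | sudo tee /etc/apt/sources.list.d/nginx.list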

After adding the repository, update your package index with sudo apt update. You should see the Nginx repository in the update output. Now install Nginx: sudo apt install nginx. The package manager downloads Nginx and its dependencies, automatically configuring the service to start at boot. Verify the installation by checking the version: nginx -v. You should see output indicating the installed version, confirming successful installation.

Start the Nginx service with sudo systemctl start nginx and enable it to launch automatically at boot: sudo systemctl enable nginx. Check the service status with sudo systemctl status nginx. You should see "active (running)" in green, indicating Nginx is operational. Open a web browser and navigate to your server's IP address. You should see the default Nginx welcome page—a simple HTML page confirming that Nginx is serving content successfully.
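
The complete install-and-verify sequence from the last two paragraphs:

sudo apt update
sudo apt install nginx
nginx -v                      # confirm the installed version
sudo systemctl start nginx    # start the service now
sudo systemctl enable nginx   # launch automatically at boot
sudo systemctl status nginx   # should report "active (running)"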

Installation on CentOS, RHEL, and Fedora

Red Hat-based distributions follow a similar pattern with syntax variations. First, create a repository configuration file: sudo vi /etc/yum.repos.d/nginx.repo. Add the following content, adjusting for your specific distribution:

[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

For RHEL, replace "centos" with "rhel" in the baseurl. For Fedora, replace it with "fedora". The $releasever and $basearch variables automatically populate with your system's version and architecture. Save and close the file. Install Nginx with sudo yum install nginx (CentOS 7) or sudo dnf install nginx (CentOS 8+, RHEL 8+, Fedora). The package manager handles dependencies and installation automatically.
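
With the repository file in place, installation is a single command:

# CentOS 7
sudo yum install nginx

# CentOS 8+, RHEL 8+, Fedora
sudo dnf install nginx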

"Repository configuration might seem tedious, but it's the difference between a maintainable system and a future nightmare. Take the time to do it right."

Start and enable Nginx identically to Ubuntu: sudo systemctl start nginx && sudo systemctl enable nginx. On CentOS and RHEL, SELinux may prevent Nginx from functioning correctly by default. If you encounter permission errors, you'll need to adjust SELinux policies. Allow Nginx to make network connections: sudo setsebool -P httpd_can_network_connect 1. For serving content from non-standard directories, adjust file contexts: sudo semanage fcontext -a -t httpd_sys_content_t "/your/custom/path(/.*)?" followed by sudo restorecon -Rv /your/custom/path. These commands ensure SELinux security while allowing Nginx to function properly.
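
The service and SELinux commands from this paragraph, gathered together:

sudo systemctl start nginx && sudo systemctl enable nginx

# Allow Nginx to make outbound network connections (e.g., when proxying)
sudo setsebool -P httpd_can_network_connect 1

# Label a custom content directory so Nginx may read it
sudo semanage fcontext -a -t httpd_sys_content_t "/your/custom/path(/.*)?"
sudo restorecon -Rv /your/custom/path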

Understanding Nginx Configuration Structure

Nginx configuration follows a hierarchical, context-based structure that initially appears complex but becomes intuitive with understanding. The main configuration file resides at /etc/nginx/nginx.conf. This file defines global settings and includes additional configuration files from /etc/nginx/conf.d/ or /etc/nginx/sites-enabled/ depending on your distribution. The configuration uses a simple syntax: directives (setting names) followed by values, terminated with semicolons. Directives are organized into blocks (contexts) defined by curly braces.

The main context contains directives that affect the entire Nginx instance: worker processes, error log location, and pid file path. The events context governs connection handling: the number of worker connections and the event-processing method. The http context contains directives for HTTP/HTTPS traffic handling: MIME types, default character sets, logging formats, and includes for virtual host configurations. Within the http context, server blocks define individual websites or applications, each with its own domain, port, and content location. Within server blocks, location blocks define how specific URI patterns are processed.

This nested structure allows configuration inheritance with specific overrides. A directive set in the http context applies to all server blocks unless overridden within a specific server block. Similarly, server-level directives apply to all locations within that server unless overridden. This inheritance model promotes DRY (Don't Repeat Yourself) principles—define common settings once at higher levels, override only where necessary. Understanding this hierarchy prevents configuration errors and makes troubleshooting more straightforward.
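
A minimal sketch of this hierarchy (illustrative values, not a complete drop-in nginx.conf):

# main context: affects the whole instance
worker_processes auto;
error_log /var/log/nginx/error.log warn;

events {
    # events context: connection processing
    worker_connections 1024;
}

http {
    # http context: applies to all HTTP/HTTPS traffic
    include /etc/nginx/mime.types;

    server {
        # server context: one website or application
        listen 80;
        server_name example.com;

        location / {
            # location context: how this URI pattern is handled
            root /var/www/html;
        }
    }
}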

Essential Configuration Directives

Several directives appear in virtually every Nginx configuration. The worker_processes directive determines how many worker processes Nginx spawns. Setting this to auto lets Nginx detect available CPU cores and create one worker per core—generally the optimal configuration. The worker_connections directive (in the events context) sets the maximum number of simultaneous connections each worker can handle. A value of 1024 works well for most scenarios; high-traffic sites might increase this to 2048 or 4096.

The listen directive within server blocks specifies which IP addresses and ports the server block responds to. listen 80; listens on all IPv4 addresses on port 80. listen [::]:80; adds IPv6 support. The server_name directive defines which domain names this server block handles: server_name example.com www.example.com;. When a request arrives, Nginx matches the Host header against server_name directives to determine which server block processes the request.

The root directive specifies the document root—the filesystem path where Nginx looks for files to serve. root /var/www/html; tells Nginx to serve files from that directory. When a request arrives for /images/logo.png, Nginx looks for /var/www/html/images/logo.png. The index directive defines default files to serve when a directory is requested: index index.html index.htm;. If someone requests /about/, Nginx looks for /var/www/html/about/index.html, then /var/www/html/about/index.htm.

Directive          | Context                      | Purpose                        | Example Value
-------------------|------------------------------|--------------------------------|------------------------------
worker_processes   | main                         | Number of worker processes     | auto
worker_connections | events                       | Max connections per worker     | 1024
listen             | server                       | IP address and port binding    | 80, 443 ssl
server_name        | server                       | Domain name matching           | example.com www.example.com
root               | http, server, location       | Document root directory        | /var/www/html
index              | http, server, location       | Default directory index files  | index.html index.htm
access_log         | http, server, location       | Access log file location       | /var/log/nginx/access.log
error_log          | main, http, server, location | Error log file and level       | /var/log/nginx/error.log warn

Configuring Server Blocks for Virtual Hosts

Server blocks (called virtual hosts in Apache terminology) allow a single Nginx instance to serve multiple websites or applications. Each server block defines a separate configuration: domain name, document root, logging, SSL certificates, and request handling rules. This capability makes Nginx extremely efficient for hosting multiple sites on a single server, whether for different customers, different projects, or staging versus production environments.

The standard approach involves creating individual configuration files for each site in /etc/nginx/conf.d/ (CentOS/RHEL) or /etc/nginx/sites-available/ (Ubuntu/Debian). On Ubuntu/Debian systems, you create configurations in sites-available, then create symbolic links in sites-enabled to activate them. This pattern allows you to maintain configurations for inactive sites without deleting them—simply remove the symlink to disable a site without losing its configuration. CentOS/RHEL systems typically place active configurations directly in conf.d, using the .conf extension.

Creating Your First Server Block

Let's create a server block for a hypothetical website at example.com. First, create the document root directory: sudo mkdir -p /var/www/example.com/html. Set appropriate permissions: sudo chown -R $USER:$USER /var/www/example.com/html and sudo chmod -R 755 /var/www/example.com. These permissions allow your user account to create and modify files while ensuring Nginx can read them.
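
As commands:

sudo mkdir -p /var/www/example.com/html
sudo chown -R $USER:$USER /var/www/example.com/html
sudo chmod -R 755 /var/www/example.com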

Create a simple test page: nano /var/www/example.com/html/index.html and add basic HTML content. Now create the server block configuration. On Ubuntu/Debian: sudo nano /etc/nginx/sites-available/example.com. On CentOS/RHEL: sudo nano /etc/nginx/conf.d/example.com.conf. Add the following configuration:

server {
    listen 80;
    listen [::]:80;
    
    server_name example.com www.example.com;
    
    root /var/www/example.com/html;
    index index.html index.htm;
    
    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;
    
    location / {
        try_files $uri $uri/ =404;
    }
}

This configuration creates a server block that listens on port 80 for both IPv4 and IPv6, responds to requests for example.com and www.example.com, serves files from the specified root directory, and logs access and errors to dedicated files. The try_files directive attempts to serve the requested URI as a file, then as a directory, and finally returns a 404 error if neither exists. This directive prevents directory traversal attacks while providing clean error handling.

"Separate log files for each virtual host aren't just organizational niceties—they're essential for troubleshooting, analytics, and security monitoring in production environments."

On Ubuntu/Debian systems, enable the site by creating a symbolic link: sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/. Test your configuration for syntax errors: sudo nginx -t. This command parses all configuration files and reports any errors without actually reloading Nginx. If the test succeeds, reload Nginx to apply changes: sudo systemctl reload nginx. Using reload instead of restart keeps existing connections alive while applying the new configuration to new connections.
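
The enable-test-reload cycle in full:

# Ubuntu/Debian only: activate the site
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/

# Check syntax, then apply without dropping connections
sudo nginx -t
sudo systemctl reload nginx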

Advanced Server Block Configurations

Real-world deployments often require more sophisticated configurations. Redirecting www to non-www (or vice versa) improves SEO by consolidating your site under a single canonical domain. Create a separate server block for the redirect:

server {
    listen 80;
    listen [::]:80;
    server_name www.example.com;
    return 301 http://example.com$request_uri;
}

This configuration catches requests for www.example.com and returns a permanent redirect (301) to the non-www version, preserving the request URI. The $request_uri variable contains the full original request URI including query strings, ensuring users land on the correct page after redirection. Place this server block before your main server block in the configuration file.

For serving static assets efficiently, create a specific location block with optimized settings:

location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2)$ {
    expires 30d;
    add_header Cache-Control "public, immutable";
    access_log off;
}

This location block matches requests for common static file types using a case-insensitive regular expression. It sets browser cache expiration to 30 days, adds cache control headers for optimal client-side caching, and disables access logging for these requests to reduce disk I/O. These optimizations significantly improve performance for sites with substantial static content.

Implementing SSL/TLS Encryption

Modern web security demands encrypted connections for all websites, not just those handling sensitive data. SSL/TLS encryption protects user privacy, prevents man-in-the-middle attacks, improves search engine rankings (Google considers HTTPS a ranking signal), and enables modern web features that browsers restrict to secure contexts. Implementing SSL/TLS on Nginx involves obtaining certificates, configuring server blocks to use them, and optimizing SSL/TLS settings for security and performance.

Let's Encrypt revolutionized SSL/TLS by providing free, automated certificates trusted by all major browsers. Certbot, the official Let's Encrypt client, automates certificate issuance and renewal. Install Certbot on Ubuntu/Debian: sudo apt install certbot python3-certbot-nginx. On CentOS/RHEL: sudo yum install certbot python3-certbot-nginx or sudo dnf install certbot python3-certbot-nginx. The python3-certbot-nginx package includes an Nginx plugin that automatically configures your server blocks.
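
The install commands side by side (on CentOS/RHEL, Certbot ships in the EPEL repository, which must be enabled first):

# Ubuntu / Debian
sudo apt install certbot python3-certbot-nginx

# CentOS / RHEL (after enabling EPEL) and Fedora
sudo dnf install certbot python3-certbot-nginx    # or sudo yum on CentOS 7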

Obtaining and Installing SSL Certificates

Before running Certbot, ensure your domain's DNS records point to your server's IP address. Let's Encrypt validates domain ownership by making HTTP requests to your server, so DNS must resolve correctly. Run Certbot with the Nginx plugin: sudo certbot --nginx -d example.com -d www.example.com. Certbot prompts for an email address (for renewal notifications), asks you to agree to terms of service, and optionally asks about sharing your email with the Electronic Frontier Foundation.
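
A typical invocation covering both the bare and www hostnames:

sudo certbot --nginx -d example.com -d www.example.com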

Certbot then validates domain ownership, obtains certificates, and automatically modifies your Nginx configuration to use them. It adds listen 443 ssl; directives, specifies certificate file paths, and optionally configures HTTP to HTTPS redirection. Review the changes Certbot made: sudo cat /etc/nginx/sites-available/example.com (Ubuntu/Debian) or sudo cat /etc/nginx/conf.d/example.com.conf (CentOS/RHEL). You'll see new directives like:

listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

The ssl_certificate directive points to the full certificate chain (your certificate plus intermediate certificates). The ssl_certificate_key directive points to your private key. The included options-ssl-nginx.conf file contains Mozilla's recommended SSL/TLS settings. The ssl_dhparam directive specifies Diffie-Hellman parameters for enhanced security. Certbot handles all these details automatically, but understanding them helps with troubleshooting and manual configurations.

"SSL/TLS isn't optional anymore. It's not about whether you handle sensitive data—it's about respecting user privacy and meeting modern web standards."

Let's Encrypt certificates expire after 90 days, but Certbot automatically installs a renewal timer. Check the timer status: sudo systemctl status certbot.timer (Ubuntu/Debian) or sudo systemctl status certbot-renew.timer (CentOS/RHEL). Test the renewal process: sudo certbot renew --dry-run. This command simulates renewal without actually requesting new certificates, verifying that the process will work when needed. If the dry run succeeds, you can trust that automatic renewal will function correctly.
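
To confirm renewal is in place:

# Check the renewal timer (the unit name varies by distribution)
sudo systemctl status certbot.timer          # Ubuntu/Debian
sudo systemctl status certbot-renew.timer    # CentOS/RHEL

# Simulate renewal without requesting real certificates
sudo certbot renew --dry-run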

Optimizing SSL/TLS Configuration

While Certbot's default SSL/TLS configuration provides good security, you can optimize further for your specific requirements. Create a custom SSL configuration snippet: sudo nano /etc/nginx/snippets/ssl-params.conf and add:

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;

ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;

ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;

add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;

This configuration disables outdated SSL protocols (keeping only TLS 1.2 and 1.3), specifies strong cipher suites, enables session caching to improve performance, activates OCSP stapling for faster certificate validation, and adds security headers. The Strict-Transport-Security header (HSTS) tells browsers to always use HTTPS for your domain. The other headers protect against clickjacking, MIME-type sniffing, and XSS attacks. Include this snippet in your server blocks: include snippets/ssl-params.conf;.
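
A sketch of how the snippet slots into an HTTPS server block, using the certificate paths Certbot issued above:

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include snippets/ssl-params.conf;

    root /var/www/example.com/html;
    index index.html index.htm;
}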

Test your SSL/TLS configuration using SSL Labs' SSL Server Test (ssllabs.com/ssltest). This free tool analyzes your configuration and assigns a grade (A+ being the highest). It identifies vulnerabilities, weak ciphers, and configuration issues. Aim for an A or A+ rating. Common issues preventing top ratings include: missing HSTS headers, weak Diffie-Hellman parameters, or supporting outdated protocols. The tool provides specific recommendations for improvement, making it invaluable for security hardening.

Performance Optimization Techniques

Out-of-the-box Nginx performs well, but production deployments benefit significantly from targeted optimizations. Performance tuning involves balancing multiple factors: connection handling, caching strategies, compression, static file serving, and resource limits. The optimal configuration depends on your specific workload—a static content server requires different tuning than a reverse proxy for dynamic applications.

Connection and Worker Process Optimization

The worker_processes directive fundamentally impacts performance. Setting it to auto lets Nginx create one worker per CPU core, generally optimal for most scenarios. However, if your server runs other resource-intensive applications, you might set a specific number: worker_processes 2; on a 4-core server reserves resources for other processes. The worker_connections directive determines maximum simultaneous connections per worker. Calculate total capacity: worker_processes × worker_connections. A server with 4 workers and 1024 connections per worker handles 4,096 simultaneous connections.

The multi_accept directive controls whether workers accept multiple connections simultaneously. Setting multi_accept on; in the events context improves performance under high load by allowing workers to accept all pending connections at once rather than one at a time. The use directive specifies the connection processing method. On Linux, use epoll; provides the best performance. Nginx usually selects the optimal method automatically, but explicitly setting it ensures consistency across environments.

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

Enabling Gzip Compression

Gzip compression dramatically reduces bandwidth usage and improves page load times, especially for text-based content like HTML, CSS, and JavaScript. Add these directives to your http context:

gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_types text/plain text/css text/xml text/javascript application/json application/javascript application/xml+rss application/rss+xml font/truetype font/opentype application/vnd.ms-fontobject image/svg+xml;
gzip_disable "msie6";

The gzip_comp_level ranges from 1 (fastest, least compression) to 9 (slowest, most compression). Level 6 provides excellent compression with reasonable CPU usage. The gzip_types directive specifies which MIME types to compress—never compress already-compressed formats like images or videos. The gzip_vary directive adds the Vary: Accept-Encoding header, helping caches understand that content varies based on whether clients accept compression. The gzip_disable directive prevents compression for Internet Explorer 6, which had buggy gzip support.

Implementing Browser Caching

Browser caching reduces server load and improves user experience by storing static assets locally. Configure cache expiration based on content change frequency. Static assets like logos rarely change—set long expiration times. CSS and JavaScript might change with deployments—use moderate expiration times. HTML pages often change frequently—use short expiration times or no caching.

location ~* \.(jpg|jpeg|png|gif|ico|svg)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}

location ~* \.(css|js)$ {
    expires 1M;
    add_header Cache-Control "public";
}

location ~* \.(woff|woff2|ttf|otf|eot)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
    add_header Access-Control-Allow-Origin "*";
}

The immutable directive tells browsers that the file will never change, allowing aggressive caching. Font files include CORS headers (Access-Control-Allow-Origin) because browsers enforce strict origin policies for fonts. These caching strategies significantly reduce bandwidth and server load for returning visitors.

"Performance optimization isn't about applying every technique indiscriminately—it's about understanding your workload and applying appropriate optimizations where they matter most."

FastCGI Caching for Dynamic Content

For dynamic applications (PHP, Python, etc.), FastCGI caching stores generated pages, serving them directly without executing backend code for every request. Configure a cache zone in the http context:

fastcgi_cache_path /var/cache/nginx/fastcgi levels=1:2 keys_zone=FASTCGICACHE:100m inactive=60m max_size=1g;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
fastcgi_cache_use_stale error timeout invalid_header http_500;
fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

This configuration creates a cache directory with a two-level hierarchy (improves filesystem performance), allocates 100MB for cache keys, removes cached items inactive for 60 minutes, and limits total cache size to 1GB. The fastcgi_cache_key defines how cache entries are identified—this example creates unique entries per scheme (HTTP/HTTPS), method (GET/POST), host, and URI. The fastcgi_cache_use_stale directive serves stale cached content if the backend is unavailable, improving resilience.

Enable caching in location blocks that proxy to FastCGI applications:

location ~ \.php$ {
    fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    
    fastcgi_cache FASTCGICACHE;
    fastcgi_cache_valid 200 60m;
    fastcgi_cache_valid 404 10m;
    fastcgi_cache_bypass $http_pragma $http_authorization;
    fastcgi_no_cache $http_pragma $http_authorization;
    
    add_header X-FastCGI-Cache $upstream_cache_status;
}

This configuration caches successful responses (200) for 60 minutes and 404 errors for 10 minutes. It bypasses caching for requests with Pragma or Authorization headers (typically dynamic or authenticated requests). The X-FastCGI-Cache header shows cache status (HIT, MISS, BYPASS) in responses, useful for debugging. Create the cache directory: sudo mkdir -p /var/cache/nginx/fastcgi && sudo chown www-data:www-data /var/cache/nginx/fastcgi (Ubuntu/Debian) or replace www-data with nginx (CentOS/RHEL).
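
The cache directory setup as commands:

sudo mkdir -p /var/cache/nginx/fastcgi

# Ubuntu/Debian (Nginx workers run as www-data)
sudo chown www-data:www-data /var/cache/nginx/fastcgi

# CentOS/RHEL (Nginx workers run as nginx)
sudo chown nginx:nginx /var/cache/nginx/fastcgi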

Configuring Nginx as a Reverse Proxy

Reverse proxy configurations allow Nginx to sit in front of application servers, handling SSL termination, load balancing, caching, and serving static files while passing dynamic requests to backend applications. This architecture separates concerns: Nginx excels at handling connections and serving static content; application servers focus on business logic. The result: improved performance, security, and scalability.

Basic Reverse Proxy Setup

Suppose you have a Node.js application running on localhost:3000. Configure Nginx to proxy requests to it:

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        proxy_cache_bypass $http_upgrade;
    }
}

The proxy_pass directive specifies the backend server. The proxy_http_version directive ensures HTTP/1.1 is used (required for WebSocket support). The proxy_set_header directives pass important information to the backend: the original host, client IP address, and protocol. Without these headers, the backend application sees all requests coming from localhost and can't determine the original client IP or whether the connection was HTTPS. The X-Forwarded-For header is particularly important for logging and security—it contains the original client IP.

For applications requiring WebSocket support (real-time features), the Upgrade and Connection headers enable protocol switching. The proxy_cache_bypass directive prevents caching of WebSocket connections. Test the configuration and reload Nginx. Your application should now be accessible through Nginx, with Nginx handling the HTTP layer while your application focuses on business logic.

Load Balancing Multiple Backend Servers

For high-availability and scalability, distribute requests across multiple backend servers. Define an upstream block in the http context:

upstream backend_servers {
    least_conn;
    
    server backend1.example.com:3000 weight=3;
    server backend2.example.com:3000 weight=2;
    server backend3.example.com:3000 weight=1 backup;
    
    keepalive 32;
}

This upstream block defines three backend servers with different weights (higher weight receives more requests). The least_conn directive uses the least-connections algorithm—Nginx sends requests to the server with the fewest active connections. Other algorithms include ip_hash (routes clients to the same server based on IP address, useful for session persistence) and round-robin (default, distributes requests evenly). The backup parameter marks backend3 as a backup—it only receives requests if other servers are unavailable. The keepalive directive maintains persistent connections to backend servers, reducing connection overhead.

Update your server block to use the upstream:

location / {
    proxy_pass http://backend_servers;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    
    # ... other proxy headers ...
}

The proxy_set_header Connection ""; directive clears the Connection header, allowing keepalive connections to backend servers. This configuration automatically distributes load across available backends, improving both performance and reliability. If a backend server fails, Nginx automatically routes requests to remaining servers.

Health Checks and Failover

Nginx Plus (commercial version) includes active health checks, but open-source Nginx provides passive health checking. Configure passive checks in your upstream block:

upstream backend_servers {
    server backend1.example.com:3000 max_fails=3 fail_timeout=30s;
    server backend2.example.com:3000 max_fails=3 fail_timeout=30s;
}

The max_fails parameter specifies how many failed connection attempts mark a server as unavailable. The fail_timeout parameter defines how long a server remains marked unavailable before Nginx retries it. With these settings, if a backend server fails three times within 30 seconds, Nginx stops sending requests to it for 30 seconds, then retries. This passive approach prevents cascading failures while maintaining high availability.

"Reverse proxy configurations transform Nginx from a simple web server into a sophisticated application delivery platform capable of enterprise-grade traffic management."

Security Hardening Best Practices

Security requires multiple layers of defense. While SSL/TLS encrypts traffic, additional measures protect against various attack vectors. Implement these practices to harden your Nginx installation against common threats.

Hiding Version Information

By default, Nginx includes version information in error pages and response headers. This information helps attackers identify specific vulnerabilities. Disable version disclosure by adding server_tokens off; to your http context. This directive removes version numbers from error pages and the Server header, making reconnaissance more difficult for potential attackers.
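
In context, this is a single line at the top of the http block:

http {
    server_tokens off;    # hide the version in the Server header and error pages
    # ... remaining http-level directives ...
}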

Rate Limiting and DDoS Protection

Rate limiting prevents abuse by restricting request rates from individual clients. Define a rate limit zone in the http context:

limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=3r/m;

This configuration creates two zones: "general" allows 10 requests per second, "login" allows 3 requests per minute. The $binary_remote_addr variable tracks requests by client IP address. The zone size (10m) determines how many unique IPs can be tracked. Apply rate limiting in server or location blocks:

location / {
    limit_req zone=general burst=20 nodelay;
}

location /login {
    limit_req zone=login burst=5;
}

The burst parameter allows temporary spikes above the rate limit. The nodelay parameter processes burst requests immediately rather than queuing them. For the login endpoint, omitting nodelay queues excessive requests, slowing down brute-force attacks. Rate limiting significantly reduces the impact of DDoS attacks and prevents resource exhaustion.

Restricting HTTP Methods

Most websites only need GET, POST, and HEAD methods. Disable others to reduce attack surface:

location / {
    if ($request_method !~ ^(GET|POST|HEAD)$) {
        return 405;
    }
}

This configuration returns a 405 Method Not Allowed error for requests using methods other than GET, POST, or HEAD. While if directives are generally discouraged in Nginx configurations (they can cause unexpected behavior), this specific use case is safe and effective for method filtering.

Preventing Hotlinking

Hotlinking occurs when other websites embed your images or files, consuming your bandwidth without permission. Prevent it with referrer checking:

location ~* \.(jpg|jpeg|png|gif)$ {
    valid_referers none blocked example.com www.example.com;
    if ($invalid_referer) {
        return 403;
    }
}

This configuration allows image requests with no referrer (direct access), blocked referrer (privacy tools), or referrers matching your domains. All other referrers receive a 403 Forbidden error. This simple technique prevents bandwidth theft while allowing legitimate access.

Logging and Monitoring Strategies

Effective logging provides visibility into server operation, helps troubleshoot issues, and detects security incidents. Nginx offers flexible logging capabilities through access logs (successful requests) and error logs (problems and diagnostics).

Customizing Log Formats

The default combined log format includes basic information, but custom formats provide deeper insights. Define a custom format in the http context:

log_format detailed '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    'rt=$request_time uct="$upstream_connect_time" '
                    'uht="$upstream_header_time" urt="$upstream_response_time"';

This format includes timing information: $request_time (total request processing time), $upstream_connect_time (time to connect to backend), $upstream_header_time (time to receive backend response headers), and $upstream_response_time (time to receive complete backend response). These metrics help identify performance bottlenecks. Apply the custom format: access_log /var/log/nginx/access.log detailed;.

Log Rotation and Management

Without rotation, log files grow indefinitely, consuming disk space and degrading performance. Most Linux distributions include logrotate, which automatically rotates logs. Nginx's logrotate configuration typically resides at /etc/logrotate.d/nginx. Verify it contains:

/var/log/nginx/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}

This configuration rotates logs daily, keeps 14 days of history, compresses old logs, and signals Nginx to reopen log files after rotation. The kill -USR1 command tells Nginx to reopen log files without interrupting service. Adjust the rotate value based on your retention requirements and disk space availability.

Centralized Logging with Syslog

For multi-server environments, centralized logging simplifies management and analysis. Nginx supports sending logs to syslog:

access_log syslog:server=logserver.example.com:514,tag=nginx_access combined;
error_log syslog:server=logserver.example.com:514,tag=nginx_error warn;

This configuration sends logs to a remote syslog server, tagging them for identification. Combined with tools like ELK Stack (Elasticsearch, Logstash, Kibana) or Graylog, centralized logging enables powerful search, analysis, and alerting capabilities across your entire infrastructure.

Troubleshooting Common Issues

Even properly configured Nginx installations occasionally encounter issues. Understanding common problems and their solutions accelerates troubleshooting and minimizes downtime.

Configuration Testing and Validation

Always test configuration changes before applying them: sudo nginx -t. This command parses configuration files and reports syntax errors without affecting the running server. If the test fails, the error message indicates the file and line number containing the problem. Common syntax errors include missing semicolons, mismatched braces, and invalid directive names. Fix the error and test again before reloading.

For more detailed configuration information, use sudo nginx -T (capital T). This command displays the entire parsed configuration, showing how included files are processed and directives are inherited. This output helps verify that includes work as expected and directives appear in the correct contexts.

Permission and Ownership Issues

Permission errors prevent Nginx from reading files or writing logs. Check file ownership: ls -la /var/www/example.com/html. Files should be owned by your user account or the Nginx user (www-data on Ubuntu/Debian, nginx on CentOS/RHEL) with appropriate read permissions. Directories need execute permissions for Nginx to access their contents. Set correct permissions: sudo chown -R www-data:www-data /var/www/example.com && sudo chmod -R 755 /var/www/example.com.

SELinux on CentOS/RHEL adds another permission layer. If files have correct ownership but Nginx still can't access them, check SELinux contexts: ls -Z /var/www/example.com/html. Files should have the httpd_sys_content_t context. Correct contexts: sudo semanage fcontext -a -t httpd_sys_content_t "/var/www/example.com(/.*)?" && sudo restorecon -Rv /var/www/example.com.

502 Bad Gateway Errors

502 errors indicate Nginx successfully received the request but couldn't get a valid response from the backend. Common causes include:

  • 🔧 Backend server not running: Verify your application server (PHP-FPM, Node.js, etc.) is running: sudo systemctl status php7.4-fpm. Start it if necessary.
  • 🔧 Incorrect proxy_pass address: Verify the address in proxy_pass matches your backend server's actual address and port.
  • 🔧 Firewall blocking backend connections: If your backend runs on a different server, ensure firewalls allow connections from Nginx.
  • 🔧 Backend timeout: If the backend takes too long to respond, increase timeout values: proxy_connect_timeout 60s; proxy_send_timeout 60s; proxy_read_timeout 60s;
  • 🔧 Socket file permissions: For Unix socket connections (common with PHP-FPM), ensure Nginx has permission to access the socket file.

Check Nginx error logs for specific error messages: sudo tail -f /var/log/nginx/error.log. The log usually indicates the exact problem: connection refused, timeout, or permission denied.

Connection Refused Errors

If you can't connect to Nginx at all, several factors might be responsible. Verify Nginx is running: sudo systemctl status nginx. If it's not running, check why it failed to start: sudo journalctl -xeu nginx. Common startup failures include configuration errors (test with nginx -t), port conflicts (another service using port 80/443), or permission issues.

Check firewall rules. On Ubuntu/Debian: sudo ufw status. Ensure Nginx Full or at least HTTP and HTTPS are allowed. On CentOS/RHEL: sudo firewall-cmd --list-all. Verify http and https services are allowed. If rules are missing, add them as described in the installation section.

Verify Nginx is listening on the expected ports: sudo netstat -tlnp | grep nginx or sudo ss -tlnp | grep nginx. You should see entries for ports 80 and 443. If Nginx is listening on 127.0.0.1 only, it's not accessible from external networks—check your listen directives.

High Memory or CPU Usage

Unusual resource consumption indicates configuration problems or attacks. Check current resource usage: top or htop. If Nginx worker processes consume excessive CPU, you might have inefficient regular expressions in location blocks, insufficient caching causing repeated backend requests, or an ongoing DDoS attack. Review access logs for unusual patterns: sudo tail -1000 /var/log/nginx/access.log | cut -d' ' -f1 | sort | uniq -c | sort -rn | head. This command shows the top IP addresses by request count. Unusually high counts from single IPs suggest attacks—implement rate limiting.

High memory usage might indicate cache sizes exceed available RAM. Review cache configurations (fastcgi_cache_path, proxy_cache_path) and reduce max_size parameters if necessary. Check for memory leaks by monitoring memory usage over time: watch -n 5 'ps aux | grep nginx'. If memory continuously increases, restart Nginx and monitor again. Persistent leaks might indicate bugs requiring Nginx updates.

Maintaining and Updating Nginx

Regular maintenance ensures security, performance, and reliability. Establish maintenance routines covering updates, monitoring, backups, and security audits.

Keeping Nginx Updated

Security vulnerabilities are discovered periodically. Subscribe to the Nginx mailing list or monitor security advisories to stay informed about updates. Update Nginx regularly using your package manager. On Ubuntu/Debian: sudo apt update && sudo apt upgrade nginx. On CentOS/RHEL: sudo yum update nginx or sudo dnf update nginx. Before updating production servers, test updates in staging environments to identify potential compatibility issues.

After updating, verify the new version: nginx -v. Test configuration compatibility: sudo nginx -t. If tests pass, reload Nginx: sudo systemctl reload nginx. Monitor error logs and application behavior after updates to catch any issues quickly. Keep rollback plans ready—maintain previous configuration backups and know how to downgrade if necessary.
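
The full update routine:

# Ubuntu / Debian
sudo apt update && sudo apt upgrade nginx

# CentOS / RHEL (use yum on CentOS 7)
sudo dnf update nginx

# Verify, test, and apply
nginx -v
sudo nginx -t
sudo systemctl reload nginx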

Configuration Backup Strategies

Back up configurations before making changes. Create timestamped backups: sudo tar -czf /root/nginx-backup-$(date +%Y%m%d-%H%M%S).tar.gz /etc/nginx. This command creates a compressed archive of your entire Nginx configuration directory. Store backups in a separate location—copying them to a remote server prevents data loss if the primary server fails. Implement automated backup scripts running daily via cron.
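
A sketch of the daily cron job suggested above (path and schedule are examples; note that % must be escaped in crontab entries):

# /etc/cron.d/nginx-backup — run daily at 02:30 as root
30 2 * * * root tar -czf /root/nginx-backup-$(date +\%Y\%m\%d).tar.gz /etc/nginx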

Version control systems like Git provide superior configuration management. Initialize a Git repository in /etc/nginx: cd /etc/nginx && sudo git init && sudo git add . && sudo git commit -m "Initial configuration". After making changes, commit them: sudo git add . && sudo git commit -m "Description of changes". Git provides complete change history, easy rollbacks, and collaboration features for team environments.
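
The Git workflow in full:

cd /etc/nginx
sudo git init
sudo git add . && sudo git commit -m "Initial configuration"

# After each subsequent change:
sudo git add . && sudo git commit -m "Description of changes"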

Monitoring and Alerting

Proactive monitoring detects issues before users notice them. Implement monitoring for key metrics: server availability (uptime checks), response times, error rates, and resource usage. Tools like Nagios, Zabbix, or Prometheus with Grafana provide comprehensive monitoring solutions. For simpler setups, UptimeRobot offers free website monitoring with email/SMS alerts.

Nginx provides a basic status page showing active connections and request statistics. Enable it by adding a server block:

server {
    listen 127.0.0.1:8080;
    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}

This configuration makes the status page available only from localhost on port 8080. Access it: curl http://127.0.0.1:8080/nginx_status. The output shows active connections, accepts, handled, and requests. Monitor these metrics over time to establish baselines and detect anomalies.

Nginx Plus (commercial version) includes an extended status module with detailed metrics and a real-time dashboard. For open-source Nginx, third-party modules like nginx-module-vts provide enhanced monitoring capabilities. Alternatively, parse access logs with tools like GoAccess for real-time web log analysis.

What's the difference between Nginx and Apache?

Nginx uses an asynchronous, event-driven architecture that handles thousands of concurrent connections with minimal memory overhead, making it extremely efficient for serving static content and acting as a reverse proxy. Apache uses a process-driven or thread-driven model, creating new processes or threads for each connection, which consumes more resources under high concurrency. Apache offers more mature module ecosystems and per-directory configuration through .htaccess files, while Nginx requires configuration changes at the server level. Many modern infrastructures use both: Nginx as a frontend proxy for static content and SSL termination, with Apache handling dynamic content behind it. The choice depends on specific requirements—Nginx excels at high-concurrency scenarios and reverse proxying, while Apache provides more flexible configuration options and broader third-party module support.

How do I redirect HTTP to HTTPS in Nginx?

Create a separate server block that listens on port 80 and returns a 301 permanent redirect to the HTTPS version. The configuration looks like: server { listen 80; server_name example.com; return 301 https://$server_name$request_uri; }. The $server_name variable contains the domain name, and $request_uri preserves the requested path and query string, ensuring users land on the correct page after redirection. Place this server block before your HTTPS server block in the configuration file. After testing with nginx -t, reload Nginx with systemctl reload nginx. All HTTP requests will automatically redirect to HTTPS, ensuring encrypted connections for all visitors. For enhanced security, add the Strict-Transport-Security header in your HTTPS server block to tell browsers to always use HTTPS for your domain.

Why am I getting 502 Bad Gateway errors?

502 Bad Gateway errors indicate Nginx successfully received the request but couldn't get a valid response from the backend server. Common causes include the backend application server not running (verify with systemctl status for your application service), incorrect proxy_pass addresses in your Nginx configuration, firewall rules blocking connections between Nginx and the backend, timeout values too low for slow backend responses, or permission issues with Unix socket files. Check Nginx error logs with tail -f /var/log/nginx/error.log to see specific error messages. The logs typically indicate whether the issue is connection refused (backend not running), timeout (backend too slow), or permission denied (socket file permissions). Systematically verify each potential cause: ensure the backend is running, confirm the proxy_pass address matches the backend's actual address and port, check firewall rules, increase timeout values if necessary, and verify socket file permissions allow Nginx access.

How do I configure Nginx for multiple websites on one server?

Use server blocks (virtual hosts) to host multiple websites on a single Nginx instance. Create separate configuration files for each site in /etc/nginx/sites-available/ (Ubuntu/Debian) or /etc/nginx/conf.d/ (CentOS/RHEL). Each server block defines its own server_name (domain), root (document root directory), and other site-specific settings. On Ubuntu/Debian, create symbolic links from sites-available to sites-enabled to activate sites. Each site should have its own document root directory (like /var/www/site1.com/html and /var/www/site2.com/html) and ideally separate log files for easier troubleshooting. Nginx matches incoming requests to server blocks based on the Host header—when a request arrives for site1.com, Nginx serves content from that site's configuration; requests for site2.com use the other configuration. This approach allows unlimited websites on a single server, limited only by available resources.

How can I improve Nginx performance?

Performance optimization involves multiple strategies. Set worker_processes auto; to create one worker per CPU core, and increase worker_connections to 2048 or higher based on expected traffic. Enable gzip compression for text-based content (HTML, CSS, JavaScript) to reduce bandwidth usage. Implement browser caching with appropriate expiration times for static assets—long expiration for rarely-changing files like logos, shorter for frequently-updated content. For dynamic content, implement FastCGI caching or proxy caching to store generated pages and serve them without executing backend code for every request. Enable keepalive connections to backend servers to reduce connection overhead. Use HTTP/2 for improved performance with modern browsers. Optimize SSL/TLS settings by enabling session caching and OCSP stapling. Serve static files directly from Nginx rather than proxying to application servers. Monitor performance metrics to identify bottlenecks—high backend response times suggest application optimization needs; high connection counts might require increased worker_connections. Each optimization should be tested and measured to verify actual improvements for your specific workload.

What are the minimum server requirements for Nginx?

Nginx itself is extremely lightweight and can run on minimal hardware—a server with 512MB RAM and a single CPU core can handle a surprising amount of traffic for static websites. However, actual requirements depend entirely on your use case. A simple static content server needs far less than a reverse proxy handling SSL termination, compression, and caching for multiple backend applications. For production environments serving moderate traffic, start with at least 1GB RAM and 2 CPU cores. High-traffic sites benefit from 4GB+ RAM and 4+ CPU cores. Disk space requirements depend on log retention policies and cache sizes—allocate at least 10GB for the system, logs, and caches. SSD storage significantly improves performance for cache-heavy configurations. Network bandwidth often becomes the bottleneck before CPU or memory—ensure adequate bandwidth for your expected traffic. Monitor actual resource usage during peak traffic periods and scale accordingly. Nginx's efficiency means you'll often find other components (databases, application servers) require more resources than Nginx itself.