The proxy_pass directive is the heart of Nginx’s reverse proxy functionality. It tells Nginx to forward requests to another server. For example:
proxy_pass http://localhost:3000;

This single directive forwards all matching requests to your backend application running on port 3000.
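One subtlety worth knowing: if the proxy_pass URL includes any URI part (even a bare trailing slash), Nginx replaces the matched location prefix with that URI before forwarding; without one, the original request path is passed through unchanged. A sketch (the /api/ and /app/ location paths are just illustrations):

```nginx
# A request for /api/users is forwarded as /users,
# because the proxy_pass URL ends with a URI ("/").
location /api/ {
    proxy_pass http://localhost:3000/;
}

# A request for /app/users is forwarded as /app/users,
# because the proxy_pass URL has no URI part.
location /app/ {
    proxy_pass http://localhost:3000;
}
```

Keeping this rule in mind saves a lot of head-scratching when a backend suddenly sees unexpected paths.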
Now let’s create an actual reverse proxy configuration. We will create a new server block file for your domain.
Create a new configuration file in sites-available:
sudo nano /etc/nginx/sites-available/myapp

Add the following configuration (replace example.com with your domain or server IP):
server {
listen 80;
server_name example.com www.example.com;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}

Create a symbolic link from sites-available to sites-enabled to activate the configuration:

sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/

If you want your new site to be the default, remove the default configuration:

sudo rm /etc/nginx/sites-enabled/default

Always test the configuration for syntax errors before reloading:

sudo nginx -t

If the configuration is valid, you will see:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Apply the changes by reloading Nginx:

sudo systemctl reload nginx

💡 Tip: Use reload instead of restart whenever possible. Reload applies configuration changes gracefully without dropping existing connections.
Proxy headers are crucial for ensuring that your backend application receives accurate information about the original client request. Without proper headers, your backend will only see requests from 127.0.0.1 (the Nginx server itself) instead of the real client IP addresses.
proxy_set_header Host $host;
Passes the original Host header from the client request to the backend. This is important when your backend serves multiple domains.

proxy_set_header X-Real-IP $remote_addr;
Passes the real IP address of the client to the backend. Without this, your application logs will show Nginx's IP instead of the real visitor's IP.

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
Passes a comma-separated list of IP addresses representing the client and any intermediate proxies. Useful for tracking the full proxy chain.

proxy_set_header X-Forwarded-Proto $scheme;
Tells the backend whether the original request came over HTTP or HTTPS. This is especially important when your backend needs to generate correct redirect URLs.
You can also configure connection and response timeouts to avoid hanging connections:
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;

If your application uses WebSockets (e.g., Socket.IO, real-time apps), add these additional headers:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_http_version 1.1;

Running your reverse proxy over HTTPS is essential for security. It encrypts all traffic between your users and your server, protects sensitive data, and is required for modern browser features. Let's Encrypt provides free, trusted SSL certificates that are easy to install.
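Incidentally, WebSocket upgrades work over HTTPS as well (wss:// URLs); the same Upgrade headers apply. A fuller sketch uses the map idiom from the Nginx documentation so that Connection is only set to upgrade when the client actually requests it (the /socket.io/ path is just an example):

```nginx
# In the http block: derive the Connection header value
# from whether the client sent an Upgrade header.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name example.com;

    location /socket.io/ {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}
```

Compared with hardcoding Connection 'upgrade', the map variant lets ordinary HTTP requests through the same location without forcing an upgrade.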
Certbot is the official client for Let’s Encrypt. Install Certbot and the Nginx plugin:
sudo apt install certbot python3-certbot-nginx -y

Run Certbot with the --nginx flag to automatically obtain and configure a certificate for your domain:

sudo certbot --nginx -d example.com -d www.example.com

Certbot will ask for your email address, prompt you to agree to the terms of service, and then automatically modify your Nginx configuration to enable HTTPS. It will also ask if you want to redirect all HTTP traffic to HTTPS — select Yes (option 2).
Let’s Encrypt certificates expire every 90 days. Certbot installs a systemd timer to automatically renew them. Verify the renewal process works correctly:
sudo certbot renew --dry-run

📝 Note: If the dry-run completes without errors, your certificates will be renewed automatically before they expire.
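If you want to confirm that the renewal timer is actually scheduled, the certbot package on Ubuntu typically registers a systemd timer (assuming a systemd-based install rather than cron):

```shell
systemctl list-timers | grep certbot
```

A line showing the certbot timer with a future trigger time indicates automatic renewal is in place.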
After running Certbot, your Nginx configuration will look similar to this:
server {
listen 443 ssl;
server_name example.com www.example.com;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
location / {
proxy_pass http://localhost:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
server {
listen 80;
server_name example.com www.example.com;
return 301 https://$host$request_uri;
}

One of Nginx's most powerful features is its ability to distribute traffic across multiple backend servers. This is called load balancing, and it helps ensure your application remains available and responsive even under heavy traffic.
To configure load balancing, you define a group of backend servers using the upstream directive:
upstream myapp_backend {
server 127.0.0.1:3000;
server 127.0.0.1:3001;
server 127.0.0.1:3002;
}
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://myapp_backend;
}
}

By default, Nginx distributes requests across the upstream servers in round-robin order. To send each new request to the server with the fewest active connections, add least_conn; inside the upstream block. To pin each client to the same backend based on its IP address (useful for sticky sessions), add ip_hash; inside the upstream block:

upstream myapp_backend {
least_conn; # or ip_hash;
server 127.0.0.1:3000;
server 127.0.0.1:3001;
}

You can assign weights to servers to direct more traffic to more powerful machines:
upstream myapp_backend {
server 127.0.0.1:3000 weight=3; # receives 3x more traffic
server 127.0.0.1:3001 weight=1;
}

Proxy caching allows Nginx to store responses from your backend server and serve them directly to subsequent clients. This dramatically reduces the load on your backend and improves response times for your users.
First, define the cache storage path in the http block of /etc/nginx/nginx.conf:
http {
proxy_cache_path /var/cache/nginx
levels=1:2
keys_zone=my_cache:10m
max_size=1g
inactive=60m
use_temp_path=off;
...
}

Then enable the cache inside your site's location block:

location / {
proxy_cache my_cache;
proxy_pass http://localhost:3000;
proxy_cache_valid 200 302 10m;
proxy_cache_valid 404 1m;
add_header X-Proxy-Cache $upstream_cache_status;
}

The X-Proxy-Cache header in the response will show HIT when Nginx serves from cache, MISS when it fetches from the backend, and BYPASS when caching is intentionally skipped.
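Two related directives are often added alongside basic caching: proxy_cache_use_stale lets Nginx serve an expired cached copy when the backend is failing, and proxy_cache_lock collapses concurrent cache misses for the same resource into a single backend request. A sketch extending the location above:

```nginx
location / {
    proxy_cache my_cache;
    proxy_pass http://localhost:3000;
    proxy_cache_valid 200 302 10m;

    # Serve stale cached content if the backend errors or times out.
    proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;

    # Let only one request populate the cache for a given key;
    # other requests wait briefly instead of stampeding the backend.
    proxy_cache_lock on;
}
```

Together these make the cache act as a shock absorber during backend outages and traffic spikes.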
The most common error with reverse proxies is 502 Bad Gateway. It means Nginx successfully received the request from the client but could not get a valid response from the backend server.
Common causes and fixes:

- The backend application is not running, or is listening on a different port. Verify that something is listening on the expected port: ss -tlnp | grep 3000.

A 504 Gateway Timeout error means the backend server took too long to respond. Fix this by increasing the proxy timeout values:
proxy_read_timeout 300s;
proxy_connect_timeout 300s;
proxy_send_timeout 300s;

If you see permission errors, the cause may be SELinux or file permission issues. Check the Nginx error log:
sudo tail -f /var/log/nginx/error.log If your backend application is not receiving the correct client IP, ensure your proxy headers are correctly configured and that your application is reading the right header (X-Real-IP or X-Forwarded-For).
- Run sudo nginx -t before every reload to catch configuration errors.
- Tune proxy_read_timeout and proxy_connect_timeout based on your application's expected response times.
- Enable compression with gzip on; and gzip_types text/plain application/json application/javascript text/css; in your nginx.conf to reduce bandwidth.
- Monitor /var/log/nginx/access.log and /var/log/nginx/error.log for issues.
- Use limit_req_zone to rate-limit requests and prevent abuse and DDoS attacks.
- Run sudo apt update && sudo apt upgrade nginx regularly to stay on the latest stable version.

Congratulations! You have successfully configured Nginx as a reverse proxy on Ubuntu. Throughout this guide, you have covered everything from the basics of what a reverse proxy is to advanced topics like load balancing, SSL configuration, and caching.
Here is a quick summary of what you accomplished:

- Created a server block that forwards requests to a backend application with proxy_pass
- Configured proxy headers so the backend sees the real client IP and protocol
- Secured the site with a free Let's Encrypt certificate and automatic renewal
- Distributed traffic across multiple backends with the upstream directive
- Enabled proxy caching to cut backend load and speed up responses
- Learned to diagnose common errors such as 502 and 504
Nginx is a powerful and flexible tool that forms the backbone of many production web architectures. As your next steps, consider exploring Docker and Nginx together, using Nginx as a Kubernetes Ingress controller, or exploring Nginx Plus for enterprise-grade features.
Should I use Nginx or Apache as a reverse proxy?
Both Nginx and Apache can serve as reverse proxies, but Nginx is generally preferred for this role due to its event-driven architecture, lower memory usage under high concurrency, and simpler configuration syntax. Apache uses a thread-based model, which consumes more resources under heavy load.
Can Nginx serve static files and act as a reverse proxy at the same time?
Yes. Nginx can serve static files directly (acting as a web server) while simultaneously proxying dynamic requests to a backend application server. This is a very common pattern — Nginx handles static assets efficiently while forwarding API and dynamic requests to Node.js, Python, or other backend services.
How can I test that my reverse proxy is working?
You can test using curl from the command line: curl -I http://your-domain.com. Check the response headers — you should see responses coming from your backend application. You can also add a custom header in Nginx and verify it appears in the response. For a quick functional test, simply open your domain in a browser and confirm your application loads.
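For example, you could add a marker header in the server block and look for it in the response (X-Proxy-Test is an arbitrary name chosen for this check):

```nginx
# Inside the server block; "always" ensures the header is
# added even on error responses.
add_header X-Proxy-Test "nginx" always;
```

After reloading Nginx, running curl -I http://example.com should show X-Proxy-Test: nginx among the response headers, confirming the request passed through your proxy.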
Is Nginx free to use?
Yes, the open-source version of Nginx (nginx.org) is completely free and includes all the reverse proxy, load balancing, and caching features covered in this guide. Nginx Plus (nginx.com) is a commercial version that adds advanced features like active health checks, JWT authentication, and an API dashboard.
Can I use Nginx as a reverse proxy with Docker?
Absolutely. Nginx is frequently used as a reverse proxy in Docker environments. You can run Nginx in a Docker container and proxy requests to other containers using Docker's internal DNS for container names. Tools like nginx-proxy and Traefik automate this pattern, but a manually configured Nginx container gives you the most control.
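As a minimal sketch, an Nginx container on the same Docker network can reach another container by its service name; here the name app and port 3000 are assumptions about your setup, and 127.0.0.11 is Docker's embedded DNS resolver:

```nginx
server {
    listen 80;

    # Docker's internal DNS; re-resolve periodically so the proxy
    # keeps working when the backend container is recreated.
    resolver 127.0.0.11 valid=10s;

    location / {
        # Using a variable forces Nginx to resolve the name at
        # request time instead of once at startup.
        set $upstream http://app:3000;
        proxy_pass $upstream;
        proxy_set_header Host $host;
    }
}
```

The variable trick matters in Docker because backend container IPs change across restarts; a hardcoded proxy_pass hostname is resolved only when Nginx starts.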