
server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        # directives go here
    }
}

D. The proxy_pass Directive

The proxy_pass directive is the heart of Nginx’s reverse proxy functionality. It tells Nginx to forward requests to another server. For example:

proxy_pass http://localhost:3000;

This single directive forwards all matching requests to your backend application running on port 3000.
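One subtlety worth knowing: whether proxy_pass includes a URI part (even just a trailing slash) changes how the request path is rewritten. A sketch using a hypothetical /api/ prefix:

```nginx
location /api/ {
    # With a URI part ("/") on proxy_pass, the matched prefix "/api/"
    # is replaced: a request for /api/users is forwarded as /users.
    proxy_pass http://localhost:3000/;
}

location /app/ {
    # Without a URI part, the path is passed through unchanged:
    # a request for /app/users is forwarded as /app/users.
    proxy_pass http://localhost:3000;
}
```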

6. Step 3 — Configuring Nginx as a Basic Reverse Proxy

Now let’s create an actual reverse proxy configuration. We will create a new server block file for your domain.

A. Create a New Server Block Configuration File

Create a new configuration file in sites-available:

sudo nano /etc/nginx/sites-available/myapp

Add the following configuration (replace example.com with your domain or server IP):

server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_cache_bypass $http_upgrade;
    }
}

B. Enable the Configuration

Create a symbolic link from sites-available to sites-enabled to activate the configuration:

sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/

C. Remove the Default Site (Optional)

If you want your new site to be the default, remove the default configuration:

sudo rm /etc/nginx/sites-enabled/default

D. Test the Nginx Configuration

Always test the configuration for syntax errors before reloading:

sudo nginx -t

If the configuration is valid, you will see:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

E. Reload Nginx

Apply the changes by reloading Nginx:

sudo systemctl reload nginx

💡 Tip: Use ‘reload’ instead of ‘restart’ whenever possible. Reload applies configuration changes gracefully without dropping existing connections.

7. Step 4 — Setting Up Proxy Headers for Better Performance


Proxy headers are crucial for ensuring that your backend application receives accurate information about the original client request. Without proper headers, your backend will only see requests from 127.0.0.1 (the Nginx server itself) instead of the real client IP addresses.

A. Essential Proxy Headers Explained

proxy_set_header Host $host;

Passes the original Host header from the client request to the backend. This is important when your backend serves multiple domains.

proxy_set_header X-Real-IP $remote_addr;

Passes the real IP address of the client to the backend. Without this, your application logs will show Nginx’s IP instead of the real visitor’s IP.

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

Passes a comma-separated list of IP addresses representing the client and any intermediate proxies. Useful for tracking the full proxy chain.

proxy_set_header X-Forwarded-Proto $scheme;

Tells the backend whether the original request came over HTTP or HTTPS. This is especially important when your backend needs to generate correct redirect URLs.

B. Timeout Configuration

You can also configure connection and response timeouts to avoid hanging connections:

proxy_connect_timeout 60s;
proxy_send_timeout    60s;
proxy_read_timeout    60s;

C. WebSocket Support

If your application uses WebSockets (e.g., Socket.IO, real-time apps), add these additional headers:

proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_http_version 1.1;
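Hard-coding the value 'upgrade' works, but the pattern recommended in the Nginx documentation uses a map so the Connection header is only set to upgrade when the client actually requests it. A sketch (the map goes in the http block, and the location block then references $connection_upgrade):

```nginx
# In the http block: choose the Connection header value per request.
# If the client sent an Upgrade header, pass "upgrade" through;
# otherwise close the upstream connection normally.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then, in the location block:
#     proxy_set_header Upgrade $http_upgrade;
#     proxy_set_header Connection $connection_upgrade;
```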

8. Step 5 — Securing the Reverse Proxy with SSL/TLS (HTTPS)


Running your reverse proxy over HTTPS is essential for security. It encrypts all traffic between your users and your server, protects sensitive data, and is required for modern browser features. Let’s Encrypt provides free, trusted SSL certificates that are easy to install.

A. Install Certbot

Certbot is the official client for Let’s Encrypt. Install Certbot and the Nginx plugin:

sudo apt install certbot python3-certbot-nginx -y

B. Obtain an SSL Certificate

Run Certbot with the --nginx flag to automatically obtain and configure a certificate for your domain:

sudo certbot --nginx -d example.com -d www.example.com

Certbot will ask for your email address, prompt you to agree to the terms of service, and then automatically modify your Nginx configuration to enable HTTPS. It will also ask if you want to redirect all HTTP traffic to HTTPS — select Yes (option 2).

C. Verify Auto-Renewal

Let’s Encrypt certificates expire every 90 days. Certbot installs a systemd timer to automatically renew them. Verify the renewal process works correctly:

sudo certbot renew --dry-run

📝 Note: If the dry-run completes without errors, your certificates will be renewed automatically before they expire.

D. Your Final HTTPS Configuration

After running Certbot, your Nginx configuration will look similar to this:

server {
    listen 443 ssl;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

9. Step 6 — Configuring Load Balancing with Nginx (Optional)


One of Nginx’s most powerful features is its ability to distribute traffic across multiple backend servers. This is called load balancing, and it helps ensure your application remains available and responsive even under heavy traffic.

A. Defining an Upstream Block

To configure load balancing, you define a group of backend servers using the upstream directive:

upstream myapp_backend {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://myapp_backend;
    }
}

B. Load Balancing Methods

By default, Nginx distributes requests across upstream servers in round-robin order. Two common alternatives are least_conn, which sends each request to the server with the fewest active connections, and ip_hash, which keeps each client IP on the same server (useful for sticky sessions):

upstream myapp_backend {
    least_conn;  # or ip_hash;
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

C. Adding Server Weights

You can assign weights to servers to direct more traffic to more powerful machines:

upstream myapp_backend {
    server 127.0.0.1:3000 weight=3;  # receives 3x more traffic
    server 127.0.0.1:3001 weight=1;
}
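The open-source version of Nginx also supports passive health checks on upstream servers. A sketch where a server is temporarily taken out of rotation after repeated failures, with a spare kept as a backup:

```nginx
upstream myapp_backend {
    # Mark a server unavailable for 30s after 3 failed attempts
    server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;

    # Only receives traffic when all primary servers are down
    server 127.0.0.1:3002 backup;
}
```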

10. Step 7 — Enabling Caching in Nginx Reverse Proxy (Optional)


Proxy caching allows Nginx to store responses from your backend server and serve them directly to subsequent clients. This dramatically reduces the load on your backend and improves response times for your users.

A. Configure the Cache Path

First, define the cache storage locations in the http block of /etc/nginx/nginx.conf:

http {
    proxy_cache_path /var/cache/nginx
        levels=1:2
        keys_zone=my_cache:10m
        max_size=1g
        inactive=60m
        use_temp_path=off;
    ...
}

B. Enable Caching in Your Server Block

location / {
    proxy_cache my_cache;
    proxy_pass http://localhost:3000;
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;
    add_header X-Proxy-Cache $upstream_cache_status;
}

The X-Proxy-Cache header in the response will show HIT when Nginx serves from cache, MISS when it fetches from the backend, and BYPASS when caching is intentionally skipped.
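Dynamic or per-user responses usually should not be cached. One common approach is to skip the cache whenever a session cookie is present (the cookie name sessionid here is an assumption; use whatever your application actually sets):

```nginx
location / {
    proxy_cache my_cache;
    proxy_pass http://localhost:3000;

    # Skip both cache lookup and cache storage when the request
    # carries a session cookie (hypothetical name "sessionid").
    proxy_cache_bypass $cookie_sessionid;
    proxy_no_cache $cookie_sessionid;
}
```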

11. Troubleshooting Common Nginx Reverse Proxy Issues


A. 502 Bad Gateway

This is the most common error with reverse proxies. It means Nginx successfully received the request from the client but could not get a valid response from the backend server.

Common causes and fixes:

- The backend application is not running. Start it and confirm it is listening on the expected port (for example, with ss -tlnp).
- The address or port in proxy_pass does not match where the backend is actually listening. Double-check the server block.
- A firewall or SELinux policy is blocking Nginx from connecting to the backend. Look for "connection refused" or "permission denied" messages in the Nginx error log.

B. 504 Gateway Timeout

A 504 error means the backend server took too long to respond. Fix this by increasing the proxy timeout values:

proxy_read_timeout 300s;
proxy_connect_timeout 300s;
proxy_send_timeout 300s;

C. Permission Denied Errors

If you see permission errors in the Nginx error log, it may be due to SELinux or file permission issues. Check the Nginx error log:

sudo tail -f /var/log/nginx/error.log

D. Nginx Not Forwarding Headers

If your backend application is not receiving the correct client IP, ensure your proxy headers are correctly configured and that your application is reading the right header (X-Real-IP or X-Forwarded-For).
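Relatedly, if Nginx itself sits behind another proxy or load balancer, $remote_addr will be that proxy's address rather than the client's. The realip module can restore the original client address from X-Forwarded-For; a sketch assuming the upstream proxy lives in 10.0.0.0/8 (adjust to your own network):

```nginx
# Trust X-Forwarded-For only when the connection comes from your
# own proxy's range (10.0.0.0/8 is an assumption for illustration).
set_real_ip_from 10.0.0.0/8;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
```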

12. Best Practices for Nginx Reverse Proxy Configuration

- Always run sudo nginx -t before applying changes, and prefer reload over restart so existing connections are not dropped.
- Set the standard proxy headers (Host, X-Real-IP, X-Forwarded-For, X-Forwarded-Proto) so your backend sees accurate client information.
- Serve all traffic over HTTPS with a valid certificate and redirect HTTP to HTTPS.
- Keep one configuration file per site in sites-available and enable it with a symlink, rather than editing nginx.conf directly.
- Set sensible proxy timeouts, and enable caching only for responses that can be safely reused.

13. Conclusion

Congratulations! You have successfully configured Nginx as a reverse proxy on Ubuntu. Throughout this guide, you have covered everything from the basics of what a reverse proxy is to advanced topics like load balancing, SSL configuration, and caching.

Here is a quick summary of what you accomplished:

- Created a server block that forwards requests to a backend application with proxy_pass.
- Configured proxy headers so the backend sees real client information.
- Secured the proxy with a free Let's Encrypt SSL certificate and automatic renewal.
- Optionally added load balancing across multiple backends and proxy caching.
- Learned how to diagnose common errors such as 502 Bad Gateway and 504 Gateway Timeout.

Nginx is a powerful and flexible tool that forms the backbone of many production web architectures. As your next steps, consider exploring Docker and Nginx together, using Nginx as a Kubernetes Ingress controller, or exploring Nginx Plus for enterprise-grade features.

14. Frequently Asked Questions (FAQs)

What is the difference between Nginx and Apache as a reverse proxy?

Both Nginx and Apache can serve as reverse proxies, but Nginx is generally preferred for this role due to its event-driven architecture, lower memory usage under high concurrency, and simpler configuration syntax. Apache uses a thread-based model, which consumes more resources under heavy load.

Can Nginx act as both a web server and a reverse proxy?

Yes. Nginx can serve static files directly (acting as a web server) while simultaneously proxying dynamic requests to a backend application server. This is a very common pattern — Nginx handles static assets efficiently while forwarding API and dynamic requests to Node.js, Python, or other backend services.
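That hybrid pattern can be sketched like this (the /var/www/myapp path and /static/ prefix are assumptions for illustration):

```nginx
server {
    listen 80;
    server_name example.com;

    # Serve static assets straight from disk
    location /static/ {
        root /var/www/myapp;
        expires 7d;
    }

    # Proxy everything else to the application server
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```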

How do I test if my Nginx reverse proxy is working?

You can test using curl from the command line: curl -I http://your-domain.com. Check the response headers — you should see responses coming from your backend application. You can also add a custom header in Nginx and verify it appears in the response. For a quick functional test, simply open your domain in a browser and confirm your application loads.

Is Nginx reverse proxy free to use?

Yes, the open-source version of Nginx (nginx.org) is completely free and includes all the reverse proxy, load balancing, and caching features covered in this guide. Nginx Plus (nginx.com) is a commercial version that adds advanced features like active health checks, JWT authentication, and an API dashboard.

Can I use Nginx reverse proxy with Docker?

Absolutely. Nginx is frequently used as a reverse proxy in Docker environments. You can run Nginx in a Docker container and proxy requests to other containers using Docker’s internal DNS for container names. Tools like nginx-proxy and Traefik automate this pattern, but a manually configured Nginx container gives you the most control.

Published by Kaif, 1 month ago
