Nginx Rate limiting on actual Client IP address

Yatin Gupta
Aug 7, 2021

Nginx lets you limit the number of requests a user can make, which provides an effective security feature.

Now, if we search for rate-limiting in Nginx on the user's IP address, that is, allowing only a certain number of requests from a given IP in a given period of time, we will typically find a configuration like this:

limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=mylimit;

        proxy_pass http://my_upstream;
    }
}
Ref: https://www.nginx.com/blog/rate-limiting-nginx/

This works well enough, but only for simpler applications or infrastructure setups.

What if your application is behind a load balancer?

Since the load balancer is now the one making requests to your server, the server sees the load balancer's IP address and starts rate-limiting those internal requests. This caps the total number of requests that can reach your server, irrespective of the client's IP address.
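To make this concrete, here is a rough sketch (with made-up addresses) of what Nginx sees when a request arrives through a load balancer:

# Illustrative values only: a load balancer at 10.0.0.5 forwards a
# request originally made by a client at 203.0.113.7.
#
#   $remote_addr            = 10.0.0.5       (the load balancer)
#   X-Forwarded-For header  = 203.0.113.7    (the real client)
#
# Because limit_req_zone above is keyed on $binary_remote_addr,
# every forwarded request counts against 10.0.0.5, so all clients
# end up sharing a single rate-limit bucket.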

Nginx limiting the requests using the load balancer's IP address

How do we apply rate-limiting on the actual client IP address in these cases?

By using the ngx_http_realip_module in Nginx (the set_real_ip_from, real_ip_header and real_ip_recursive directives): http://nginx.org/en/docs/http/ngx_http_realip_module.html

Basically, we need to tell Nginx to use the X-Forwarded-For header to determine the actual client IP. X-Forwarded-For contains the list of IP addresses the request has been routed through, including the originating client IP address.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For
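As an example, a request that has passed through two proxies might carry a header like the following (the addresses are only illustrative):

# Illustrative X-Forwarded-For value after passing through two proxies:
#
#   X-Forwarded-For: 203.0.113.7, 10.0.0.5, 10.0.0.9
#
# The left-most entry is the originating client; each proxy appends
# the address it received the request from.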

So the Nginx configuration will look like this:

The http block

http {
    set_real_ip_from 0.0.0.0/0;
    real_ip_recursive on;
    real_ip_header X-Forwarded-For;

    # Optional: log the forwarded IP addresses
    log_format json_access escape=json
        '{'
            '"proxy_add_x_forwarded_for": "$proxy_add_x_forwarded_for"'
        '}';
    access_log /var/log/nginx/access.log json_access;
}
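Note that set_real_ip_from 0.0.0.0/0 tells Nginx to trust the X-Forwarded-For header from any source, which also means clients could spoof their IP. If you know the address range your load balancer uses, it is safer to trust only that range; the 10.0.0.0/24 below is just a placeholder for your own setup:

http {
    # Trust X-Forwarded-For only when the request comes from the
    # load balancer's subnet (placeholder range, adjust to your setup).
    set_real_ip_from 10.0.0.0/24;
    real_ip_recursive on;
    real_ip_header X-Forwarded-For;
}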

The Server Block File

limit_req_zone $binary_remote_addr zone=mylimit:10m rate=30r/m;

server {
    location /api {
        proxy_pass http://web-app:3000/api;
        limit_req zone=mylimit burst=20 nodelay;
    }
}
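As a small optional addition (not part of the original setup), rejected requests can be made to return 429 Too Many Requests instead of the default 503 by adding limit_req_status to the location block:

location /api {
    proxy_pass http://web-app:3000/api;
    limit_req zone=mylimit burst=20 nodelay;
    limit_req_status 429;   # default is 503
}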

Refer to the links provided above for the meaning and usage of the configuration directives used here.

This was a big learning for us when one of our clients went live with high load, got rate-limited, and ended up with degraded performance.
