Over the last few days we have been gradually launching a new AI-based bot prevention system, developed by our own DevOps specialists, on our servers. We are already seeing amazing results: every hour the system blocks between 500,000 and 2 million brute-force attempts across all our servers. This means we have prevented an unknown number of potential unauthorized logins. Even more importantly, we have saved an enormous amount of server resources that can now be used for meaningful, legitimate activity by our users.
Why are bots a problem?
Malicious traffic is an enormous problem that affects virtually every website online. This traffic is usually created by bots trying to gain access to your site by brute-forcing its login: the bots perform multiple login attempts using different combinations of usernames and passwords. If you have a strong password, the chance of a successful bot login is minimal. However, this activity is still a serious problem, because the login attempts consume a huge amount of server resources. For a personal blog, for example, bot traffic can be several times the legitimate traffic generated by real human visitors. Even when bot activity is not large enough to cause a denial of service, it can still make your hosting more costly by pushing you over your account's resource limits, since the account has to handle not only your legitimate visitors' traffic but the unwanted bot traffic as well.
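To give a feel for what "multiple login attempts" looks like from the server's side, here is a minimal sketch of per-IP failed-login counting over a sliding time window, the simplest building block of brute-force detection. The threshold and window values are hypothetical, chosen for illustration only:

```python
import time
from collections import defaultdict

# Hypothetical limits for illustration; real systems tune these carefully.
MAX_FAILURES = 10      # failed attempts tolerated...
WINDOW_SECONDS = 300   # ...within this sliding window

failures = defaultdict(list)  # ip -> timestamps of recent failed logins

def record_failed_login(ip, now=None):
    """Record a failed login; return True once the IP looks like a bot."""
    now = time.time() if now is None else now
    # Drop failures that have aged out of the window.
    failures[ip] = [t for t in failures[ip] if now - t < WINDOW_SECONDS]
    failures[ip].append(now)
    return len(failures[ip]) > MAX_FAILURES
```

A legitimate visitor mistyping a password a few times stays well under the limit; a bot cycling through credential lists crosses it within seconds.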
How does our system work?
Artificial Intelligence analyzes data from multiple servers
The main difficulty in fighting bot activity is that bots are clever and elusive. Bot attacks use different IPs and user agents, and the data from attempts aimed at a single site login, or even a single server, is often not enough to identify a brute-forcing bot. We have had a brute-force prevention system on each of our servers for a long time, but the new AI is much more efficient because it collects and analyzes data from all our servers simultaneously. Based on the results of that analysis, it can also automatically take action to stop unwanted bots. Our AI monitors numerous indicators to detect malicious behaviour patterns and block bad traffic. Some of them are:
- Failed login attempts in the most popular web applications: WordPress, Drupal, Joomla, Magento, etc.
- Number of simultaneous connections to different URLs
- Different request types and known DDoS vulnerabilities in applications
- Dynamic list of bad user agents that’s constantly being updated
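One simple way to picture how indicators like these can be combined is a weighted score per IP, aggregated from all servers. This is only a sketch under assumed signal names and weights, not the production system's actual model:

```python
# Hypothetical per-IP signals (each normalized to 0..1) and weights;
# the names mirror the indicators above but are illustrative only.
SIGNAL_WEIGHTS = {
    "failed_logins": 0.5,         # failed app logins (WordPress, Joomla, ...)
    "parallel_connections": 0.2,  # simultaneous connections to different URLs
    "ddos_pattern_hits": 0.2,     # requests matching known DDoS patterns
    "bad_user_agent": 0.1,        # match against a bad user-agent list
}
BLOCK_THRESHOLD = 0.6  # assumed cut-off for flagging an IP

def malicious_score(signals):
    """Combine normalized indicator values into a single score."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

def should_block(signals):
    """Flag the IP when its combined score reaches the threshold."""
    return malicious_score(signals) >= BLOCK_THRESHOLD
```

The advantage of scoring several weak signals together is exactly what cross-server analysis enables: an IP that looks harmless on any single server can still accumulate a damning score overall.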
We have introduced a challenge CAPTCHA page
Once our system flags a certain IP address or user agent as malicious, it is immediately blocked and challenged with a CAPTCHA page. The system learns continuously how to minimize false positives: if a human visitor reaches the CAPTCHA page and solves it, the address or agent associated with that solution is whitelisted. If the CAPTCHA page persists (e.g. you see it more than once within 24 hours), please contact our support.
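The block/challenge/whitelist flow described above can be sketched as a small state machine; the function and set names here are hypothetical, purely to make the flow concrete:

```python
# Illustrative state only; a real system would persist and expire these.
blocked = set()    # IPs currently challenged with the CAPTCHA page
whitelist = set()  # IPs that have proven human by solving it

def flag_malicious(ip):
    """Called when the detection system flags an IP as malicious."""
    if ip not in whitelist:
        blocked.add(ip)

def handle_request(ip, captcha_solved=False):
    """Decide which page the visitor sees: the site or the CAPTCHA."""
    if ip in whitelist:
        return "site"
    if ip in blocked:
        if captcha_solved:
            # A human solved the challenge: whitelist and unblock.
            whitelist.add(ip)
            blocked.discard(ip)
            return "site"
        return "captcha"
    return "site"
```

The key property is that a single solved challenge permanently moves the visitor to the whitelist, which is why seeing the CAPTCHA repeatedly within 24 hours would indicate something is wrong.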