Linux – 19,000 failed root password attempts in auth.log in 2 days

authentication, linux, ssh

I was debugging some login issues and happened to notice a lot of failed root password attempts in my /var/log/auth.log. This is a Linode VPS.

I grep'd to find 19K attempts in the past two days.

I know I'm supposed to move sshd to a different port, but it seems like a pain to remember and use, and a bunch of scripts would need to be updated. Besides, couldn't the attacker just portscan to find the new port?

I have a very strong password set for root, so I'm not too concerned.

And I guess it's only about 6 per minute, so probably not a performance concern.

But it doesn't seem ideal.

Any thoughts on how to block it or prevent it?

Seems to be coming from a rather small list of IP addresses… maybe I could block those? I checked a few, and they seem to be in China.
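
For reference, one rough way to see which addresses are responsible (assuming the standard OpenSSH auth.log format; adjust the path and pattern as needed) is something like:

grep "Failed password for root" /var/log/auth.log \
  | grep -oE 'from ([0-9]{1,3}\.){3}[0-9]{1,3}' \
  | awk '{print $2}' \
  | sort | uniq -c | sort -rn | head

That makes it easier to judge whether blocking a handful of addresses would even be worth it.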

Another option would be to disable root login and set up a sudo user with a unique username. But I don't think that will help with this particular problem: people can still TRY to ssh in as root…

UPDATE:
After apt-get install fail2ban with default settings, I saw a 6-8x reduction in the number of failed root attempts in auth.log:

root@localhost:/var/log# grep "Failed password for root" auth.log | wc
    21301  327094 2261733
root@localhost:/var/log# grep "May  1.*Failed password for root" auth.log | wc
    6217   95973  664165
root@localhost:/var/log# grep "May  2.*Failed password for root" auth.log | wc
    8370  127280  880779
root@localhost:/var/log# grep "May  3.*Failed password for root" auth.log | wc
    1030   16250  111837
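
I stuck with the defaults, but for anyone who wants to tighten them, overrides go in /etc/fail2ban/jail.local, roughly along these lines (the values here are illustrative, not what I'm running; on older fail2ban releases the jail section is named [ssh] rather than [sshd]):

# /etc/fail2ban/jail.local
[sshd]
enabled = true
# ban for an hour after 3 failures within 10 minutes
maxretry = 3
findtime = 600
bantime = 3600

service fail2ban restart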

I also eliminated root SSH password authentication and switched to key-based login only for root, by following this: https://unix.stackexchange.com/questions/99307/permit-root-to-login-via-ssh-only-with-key-based-authentication
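
In case it saves someone a click, the gist is a single sshd_config change, something like this (on older OpenSSH releases the value is spelled without-password rather than prohibit-password):

# /etc/ssh/sshd_config
# allow root in only with a key, never with a password
PermitRootLogin prohibit-password

service ssh restart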

This doesn't stop failed password entries from appearing in auth.log, though. It just means that, even if they know the password, they can't get through. So, fail2ban reduces those log entries, and key-based ssh for root provides actual security.

Finally, as suggested, I have implemented IP whitelisting for port 22 to completely eliminate the log entries. For reference, this can be done on Ubuntu with ufw like so:

apt-get install ufw
ufw allow from 127.198.4.3 to any port 22
ufw --force enable

I'm using --force in there because I'm doing this in scripts, non-interactively, when I spin up new nodes.
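
To sanity-check the result, ufw can list what it ended up with:

# confirm that only the whitelisted address can reach port 22
ufw status verbose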

HOWEVER, to deal with access from my dynamic IP, without mucking with Linode's web-based Lish console every time my IP changes, I'm using a hybrid approach.

I have a main server, hosting my main domain, and the other nodes are spun up as needed, as sub-servers on sub-domains. The main server houses the private key that is used to ssh into the sub-servers, and its IP is the one whitelisted for SSH on them.

But the main server uses NO SSH keys and no IP whitelist. I want to be able to access it from anywhere in an emergency, even if I don't have the key file with me, or from a machine I wouldn't trust enough to install the key file on. I need something like password access, but something that stays safe on a machine with a key logger or a compromised SSH client.

The solution to that, which I've been using for years, is hardware-based two-factor authentication, using a YubiKey USB device and the Yubico PAM module installed on the main server.
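
The setup is roughly as follows; the client id and mapping file below are placeholders, so check the pam_yubico documentation for your distro before copying anything:

# /etc/pam.d/sshd -- add near the top, before the common-auth include
# (12345 and the mapping file are placeholders for your own values)
auth required pam_yubico.so id=12345 authfile=/etc/yubikey_mappings

# /etc/yubikey_mappings maps users to YubiKey public ids, e.g.
#   root:ccccccexample1
#
# sshd_config typically also needs:
#   ChallengeResponseAuthentication yes
#   UsePAM yes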

So, even if the attacker has the root password to the main server, they can't get in without my YubiKey. And that's easy to carry with me. I can access root on my server safely from the dirtiest internet cafe in the world.

And after reaching the main server, I can ssh into any of the sub-servers with no password.

So I still need fail2ban on the main server, to cut down on the auth.log entries. It's not needed after all on the sub-servers, because IP whitelisting takes care of the problem.

Best Answer

Trying to block specific IP ranges used for brute force attacks is not the ideal way to approach the issue. There are a number of botnets and servers constantly scanning the internet and attempting to compromise servers and devices. It would be extremely inefficient to try to block all of that traffic, so your best option is to either mitigate it or avoid it altogether.

One way to mitigate the number of connections you see is to change the SSH port. Most attackers take the shotgun approach to compromising hosts, so as long as you pick a non-standard alternate port you should not see many SSH attempts unless it is a targeted attack. It takes a lot of time to scan all the ports on the internet, so while this isn't the best approach it can be considered a way to mitigate some attacks.
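
If you do go that route, the change itself is small; a sketch, assuming an Ubuntu-style setup with ufw and that 2222 is otherwise unused:

# /etc/ssh/sshd_config
Port 2222

# open the new port and reload sshd before closing port 22
ufw allow 2222/tcp
service ssh restart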

Another mitigation method is to set up something like Fail2Ban to automatically blacklist IPs that fail to authenticate multiple times. This can mitigate some of the attacks, but is not very effective these days as most brute force attacks come from distributed hosts.

The best way to handle SSH security is to limit access to the service itself. This can be done by whitelisting IPs that are allowed to access your SSH port, and by setting up key based authentication and then disabling password authentication. If the attacker can't reach the SSH port or never has a chance to try a password, there is little worry about a brute force attack.
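
For the key-based part, the sshd_config change is minimal; a sketch (verify that a key-based login already works in a second session before reloading):

# /etc/ssh/sshd_config
# keys only; no password prompts at all
PasswordAuthentication no
PubkeyAuthentication yes

service ssh restart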
