My server is under a DDoS attack and I want to block the IP that is doing it. What logs should I be looking at to determine the attacker's IP?
Linux Apache DDoS – How to Find Out IPs During a DDoS Attack
apache-2.2 ddos linux
Related Solutions
When deciding what permissions to use, you need to know exactly who your users are and what they need. A webserver interacts with two types of user.
Authenticated users have a user account on the server and can be provided with specific privileges. This usually includes system administrators, developers, and service accounts. They usually make changes to the system using SSH or SFTP.
Anonymous users are the visitors to your website. Although they don't have permission to access files directly, they can request a web page and the web server acts on their behalf. You can limit the access of anonymous users by being careful about what permissions the web server process has. On many Linux distributions, Apache runs as the www-data user, but it can be different. Use ps aux | grep httpd or ps aux | grep apache to see which user Apache runs as on your system.
Notes on linux permissions
Linux and other POSIX-compliant systems use traditional unix permissions. There is an excellent article on Wikipedia about Filesystem permissions so I won't repeat everything here. But there are a few things you should be aware of.
The execute bit
Interpreted scripts (e.g. Ruby, PHP) work just fine without the execute permission; only binaries and shell scripts need the execute bit. On directories, the execute bit grants permission to traverse (enter) the directory: the webserver needs it on a directory to reach and serve any files inside, and additionally needs the read bit to list the directory's contents.
Default new file permissions
When a file is created, it normally inherits the group id of whoever created it. But sometimes you want new files to inherit the group id of the folder where they are created, so you would enable the SGID bit on the parent folder.
Default permission values depend on your umask. The umask subtracts permissions from newly created files, so the common value of 022 results in directories being created with 755 and regular files with 644 (the execute bit is never set automatically on new regular files). When collaborating with a group, it's useful to change your umask to 002 so that files you create can be modified by group members. And if you want to customize the permissions of uploaded files, you either need to change the umask for Apache or run chmod after the file has been uploaded.
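A quick way to see the umask arithmetic in action, sketched under the assumption of a GNU userland (stat -c is GNU-specific; BSD/macOS stat takes different flags):

```shell
# Demonstrate umask arithmetic: new regular files start from 666,
# directories from 777, and the umask bits are subtracted.
(
  umask 022
  d=$(mktemp -d)
  cd "$d" || exit 1
  touch file
  mkdir dir
  stat -c '%a %n' file dir    # prints: 644 file, then 755 dir
)
```

Running the same experiment with umask 027 gives 640 for files and 750 for directories, which is why it is suggested for the single-maintainer setup below.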
The problem with 777
When you chmod 777 your website, you have no security whatsoever. Any user on the system can change or delete any file in your website. But more seriously, remember that the web server acts on behalf of visitors to your website, and now the web server is able to change the same files that it's executing. If there are any programming vulnerabilities in your website, they can be exploited to deface your website, insert phishing attacks, or steal information from your server without you ever knowing.
Additionally, if your server runs on a well-known port (which it should to prevent non-root users from spawning listening services that are world-accessible), that means your server must be started by root (although any sane server will immediately drop to a less-privileged account once the port is bound). In other words, if you're running a webserver where the main executable is part of the version control (e.g. a CGI app), leaving its permissions (or, for that matter, the permissions of the containing directory, since the user could rename the executable) at 777 allows any user to run any executable as root.
Define the requirements
- Developers need read/write access to files so they can update the website
- Developers need read/write/execute on directories so they can browse around
- Apache needs read access to files and interpreted scripts
- Apache needs read/execute access to serveable directories
- Apache needs read/write/execute access to directories for uploaded content
Maintained by a single user
If only one user is responsible for maintaining the site, set them as the user owner on the website directory and give the user full rwx permissions. Apache still needs access so that it can serve the files, so set www-data as the group owner and give the group r-x permissions.
In your case, Eve, whose username might be eve, is the only user who maintains contoso.com:
chown -R eve contoso.com/
chgrp -R www-data contoso.com/
chmod -R 750 contoso.com/
chmod g+s contoso.com/
ls -l
drwxr-s--- 2 eve www-data 4096 Feb 5 22:52 contoso.com
If you have folders that need to be writable by Apache, you can just modify the permission values for the group owner so that www-data has write access.
chmod g+w uploads
ls -l
drwxrws--- 2 eve www-data 4096 Feb 5 22:52 uploads
The benefit of this configuration is that it becomes harder (but not impossible*) for other users on the system to snoop around, since only the user and group owners can browse your website directory. This is useful if you have secret data in your configuration files. Be careful about your umask! If you create a new file here, the permission values will probably default to 644, which leaves it world-readable. You can run umask 027 so that new files default to 640 (rw-r-----).
Maintained by a group of users
If more than one user is responsible for maintaining the site, you will need to create a group to use for assigning permissions. It's good practice to create a separate group for each website, and name the group after that website.
groupadd dev-fabrikam
usermod -a -G dev-fabrikam alice
usermod -a -G dev-fabrikam bob
In the previous example, we used the group owner to give privileges to Apache, but now that is used for the developers group. Since the user owner isn't useful to us any more, setting it to root is a simple way to ensure that no privileges are leaked. Apache still needs access, so we give read access to the rest of the world.
chown -R root fabrikam.com
chgrp -R dev-fabrikam fabrikam.com
chmod -R 775 fabrikam.com
chmod g+s fabrikam.com
ls -l
drwxrwsr-x 2 root dev-fabrikam 4096 Feb 5 22:52 fabrikam.com
If you have folders that need to be writable by Apache, you can make Apache either the user owner or the group owner. Either way, it will have all the access it needs. Personally, I prefer to make it the user owner so that the developers can still browse and modify the contents of upload folders.
chown -R www-data uploads
ls -l
drwxrwxr-x 2 www-data dev-fabrikam 4096 Feb 5 22:52 uploads
Although this is a common approach, there is a downside. Since every other user on the system has the same privileges to your website as Apache does, it's easy for other users to browse your site and read files that may contain secret data, such as your configuration files.
You can have your cake and eat it too
This can be further improved upon. It's perfectly legal for the owner to have fewer privileges than the group, so instead of wasting the user owner by assigning it to root, we can make Apache the user owner on the directories and files in your website. This is a reversal of the single maintainer scenario, but it works equally well.
chown -R www-data fabrikam.com
chgrp -R dev-fabrikam fabrikam.com
chmod -R 570 fabrikam.com
chmod g+s fabrikam.com
ls -l
dr-xrwx--- 2 www-data dev-fabrikam 4096 Feb 5 22:52 fabrikam.com
If you have folders that need to be writable by Apache, you can just modify the permission values for the user owner so that www-data has write access.
chmod u+w uploads
ls -l
drwxrwx--- 2 www-data dev-fabrikam 4096 Feb 5 22:52 uploads
One thing to be careful about with this solution is that the user owner of new files will match the creator instead of being set to www-data. So any new files you create won't be readable by Apache until you chown them.
*Apache privilege separation
I mentioned earlier that it's actually possible for other users to snoop around your website no matter what kind of privileges you're using. By default, all Apache processes run as the same www-data user, so any Apache process can read files from all other websites configured on the same server, and sometimes even make changes. Any user who can get Apache to run a script can gain the same access that Apache itself has.
To combat this problem, there are various approaches to privilege separation in Apache. However, each approach comes with various performance and security drawbacks. In my opinion, any site with higher security requirements should be run on a dedicated server instead of using VirtualHosts on a shared server.
Additional considerations
I didn't mention it before, but it's usually a bad practice to have developers editing the website directly. For larger sites, you're much better off having some kind of release system that updates the webserver from the contents of a version control system. The single maintainer approach is probably ideal, but instead of a person you have automated software.
If your website allows uploads that don't need to be served out, those uploads should be stored somewhere outside the web root. Otherwise, you might find that people are downloading files that were intended to be secret. For example, if you allow students to submit assignments, they should be saved into a directory that isn't served by Apache. This is also a good approach for configuration files that contain secrets.
For a website with more complex requirements, you may want to look into the use of Access Control Lists. These enable much more sophisticated control of privileges.
If your website has complex requirements, you may want to write a script that sets up all of the permissions. Test it thoroughly, then keep it safe. It could be worth its weight in gold if you ever find yourself needing to rebuild your website for some reason.
You are experiencing a denial of service attack. If the traffic is coming from multiple networks (different IPs on different subnets) you've got a distributed denial of service (DDoS); if it's all coming from the same place you have a plain old DoS. It can be helpful to check which it is with netstat, if you are able, though this might be hard to do in the middle of an attack.
Denial of service usually falls into two categories: traffic-based and load-based. A third type, where the service crashes outright, is exploit-based DoS and is quite different.
If you're trying to pin down what type of attack is happening, capture some traffic (using wireshark, tcpdump, or libpcap) if possible, but be aware that you will probably capture quite a lot of it.
As often as not, these will come from botnets (networks of compromised hosts under the central control of some attacker, whose bidding they will do). This is a good way for the attacker to (very cheaply) acquire the upstream bandwidth of lots of different hosts on different networks to attack you with, while covering their tracks. The Low Orbit Ion Cannon is one example of a botnet (despite being voluntary instead of malware-derived); Zeus is a more typical one.
Traffic-based
If you're under a traffic-based DoS, you're finding that there is just so much traffic coming to your server that its connection to the Internet is completely saturated. There is a high packet loss rate when pinging your server from elsewhere, and (depending on routing methods in use) sometimes you're also seeing really high latency (the ping is high). This kind of attack is usually a DDoS.
While this is a really "loud" attack, and it's obvious what is going on, it's hard for a server administrator to mitigate (and basically impossible for a user of shared hosting to mitigate). You're going to need help from your ISP; let them know you're under a DDoS and they might be able to help.
However, most ISPs and transit providers will proactively realize what is going on and publish a blackhole route for your server: a route (typically a /32) that sends traffic destined for your server to a null destination, making it no longer reachable on the Internet. These routes are eventually removed. This doesn't help you at all; the purpose is to protect the ISP's network from the deluge. For the duration, your server will effectively lose Internet access.
The only way your ISP (or you, if you have your own AS) is going to be able to help is if they are using intelligent traffic shapers that can detect and rate-limit probable DDoS traffic. Not everyone has this technology. However, if the traffic is coming from one or two networks, or one host, they might also be able to block the traffic ahead of you.
In short, there is very little you can do about this problem. The best long-term solution is to host your services in many different locations on the Internet which would have to be DDoSed individually and simultaneously, making the DDoS much more expensive. Strategies for this depend on the service you need to protect; DNS can be protected with multiple authoritative nameservers, SMTP with backup MX records and mail exchangers, and HTTP with round-robin DNS or multihoming (but some degradation might be noticeable for the duration anyway).
Load balancers are rarely an effective solution to this problem, because the load balancer itself is subject to the same problem and merely creates a bottleneck. IPTables or other firewall rules will not help because the problem is that your pipe is saturated. Once the connections are seen by your firewall, it is already too late; the bandwidth into your site has been consumed. It doesn't matter what you do with the connections; the attack is mitigated or finished when the amount of incoming traffic goes back down to normal.
If you are able to do so, consider using a content distribution network (CDN) like Akamai, Limelight and CDN77, or use a DDoS scrubbing service like CloudFlare or Prolexic. These services take active measures to mitigate these types of attacks, and also have so much available bandwidth in so many different places that flooding them is not generally feasible.
If you decide to use CloudFlare (or any other CDN/proxy), remember to hide your server's IP. If an attacker finds the IP, they can again DDoS your server directly, bypassing CloudFlare. To hide the IP, your server should never communicate directly with other servers/users unless they are trusted. For example, your server should not send emails directly to users. This doesn't apply if you host all your content on the CDN and don't have a server of your own.
Also, some VPS and hosting providers are better at mitigating these attacks than others. In general, the larger they are, the better they will be at this; a provider which is very well-peered and has lots of bandwidth will be naturally more resilient, and one with an active and fully staffed network operations team will be able to react more quickly.
Load-based
When you are experiencing a load-based DDoS, you notice that the load average is abnormally high (or CPU, RAM, or disk usage, depending on your platform and the specifics). Although the server doesn't appear to be doing anything useful, it is very busy. Often, there will be copious amounts of entries in the logs indicating unusual conditions. More often than not this is coming from a lot of different places and is a DDoS, but that isn't necessarily the case. There don't even have to be a lot of different hosts.
This attack is based on making your service do a lot of expensive stuff. This could be something like opening a gargantuan number of TCP connections and forcing you to maintain state for them, or uploading excessively large or numerous files to your service, or perhaps doing really expensive searches, or really doing anything that is expensive to handle. The traffic is within the limits of what you planned for and can take on, but the types of requests being made are too expensive to handle so many of.
Firstly, that this type of attack is possible is often indicative of a configuration issue or bug in your service. For instance, you may have overly verbose logging turned on, and may be storing logs on something that's very slow to write to. If someone realizes this and does a lot of something which causes you to write copious amounts of logs to disk, your server will slow to a crawl. Your software might also be doing something extremely inefficient for certain input cases; the causes are as numerous as there are programs, but two examples would be a situation that causes your service to not close a session that is otherwise finished, and a situation that causes it to spawn a child process and leave it. If you end up with tens of thousands of open connections with state to keep track of, or tens of thousands of child processes, you'll run into trouble.
The first thing you might be able to do is use a firewall to drop the traffic. This isn't always possible, but if there is a characteristic you can find in the incoming traffic (tcpdump can be nice for this if the traffic is light), you can drop it at the firewall and it will no longer cause trouble. The other thing to do is to fix the bug in your service (get in touch with the vendor and be prepared for a long support experience).
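If a usable pattern does emerge, the firewall side can look something like the sketch below. The source range 198.51.100.0/24 is a reserved documentation network used here as a placeholder, not a real attacker:

```shell
# Placeholder example: drop web traffic from an identified source range.
# 198.51.100.0/24 is a reserved documentation network -- substitute the
# range you actually observed. Requires root.
iptables -I INPUT -s 198.51.100.0/24 -p tcp --dport 80 -j DROP
```

Rules like this only relieve your server's load; they do nothing for a saturated pipe, as noted above.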
However, if it's a configuration issue, start there. Turn down logging on production systems to a reasonable level (depending on the program this is usually the default, and will usually involve making sure "debug" and "verbose" levels of logging are off; if everything a user does is logged in exact and fine detail, your logging is too verbose). Additionally, check child process and request limits, possibly throttle incoming requests, connections per IP, and the number of allowed child processes, as applicable.
It goes without saying that the better configured and better provisioned your server is, the harder this type of attack will be. Avoid being stingy with RAM and CPU in particular. Ensure your connections to things like backend databases and disk storage are fast and reliable.
Exploit-based
If your service mysteriously crashes extremely quickly after being brought up, particularly if you can establish a pattern of requests that precede the crash and the request is atypical or doesn't match expected use patterns, you might be experiencing an exploit-based DoS. This can come from as few as just one host (with pretty much any type of internet connection), or many hosts.
This is similar to a load-based DoS in many respects, and has basically the same causes and mitigations. The difference is merely that in this case, the bug doesn't cause your server to be wasteful, but to die. The attacker is usually exploiting a remote crash vulnerability, such as garbled input that causes a null-dereference or something in your service.
Handle this similarly to an unauthorized remote access attack. Firewall against the originating hosts and type of traffic if they can be pinned down. Use validating reverse proxies if applicable. Gather forensic evidence (try and capture some of the traffic), file a bug ticket with the vendor, and consider filing an abuse complaint (or legal complaint) against the origin too.
These attacks are fairly cheap to mount, if an exploit can be found, and they can be very potent, but also relatively easy to track down and stop. However, techniques that are useful against traffic-based DDoS are generally useless against exploit-based DoS.
Best Answer
Take a look at the top IP addresses. If any stand out from the others, those would be the ones to firewall.
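The exact one-liner has been lost from this copy of the answer; a reconstruction consistent with the description below (the column offset and flags are assumptions) would be:

```shell
# Reconstruction (an assumption -- not the original command): list current
# TCP/UDP connections touching port 80, extract the remote IP (column 45
# onward in typical netstat output), and count connections per IP.
netstat -ntu 2>/dev/null | grep ':80 ' | cut -c 45- | cut -f 1 -d ':' | sort | uniq -c | sort -nr | head
```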
This will look at the currently active connections to see if there are any IPs connecting to port 80. You might need to alter the cut -c 45- as the IP address may not start at column 45. If someone was doing a UDP flood to your webserver, this would pick it up as well.
On the off chance that neither of these shows any IPs that are excessively out of the norm, you would need to assume that you have a botnet attacking you and would need to look for particular patterns in the logs to see what they are doing. A common attack against WordPress sites, for example, is a flood of POST requests to wp-login.php.
If you look through the access logs for your website, tallying the requested URLs would show you the most commonly hit ones. You might find that they are hitting a particular script rather than loading the entire site.
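The pipeline itself did not survive in this copy; a sketch, assuming the common/combined Apache log format (where the URL is the 7th whitespace-separated field) and a Debian-style log path:

```shell
# Tally the most requested URLs from an Apache access log.
# The log path and field position are assumptions for your setup.
awk '{print $7}' /var/log/apache2/access.log 2>/dev/null | sort | uniq -c | sort -nr | head
```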
A similar tally over the UserAgent field would allow you to see common UserAgents. It is possible that they are using a single UserAgent in their attack.
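That command is also missing from this copy; a sketch, assuming the combined log format where the UserAgent is the 6th double-quote-delimited field:

```shell
# Tally the most common UserAgents from an Apache combined-format log.
# The log path is an assumption for your setup.
awk -F'"' '{print $6}' /var/log/apache2/access.log 2>/dev/null | sort | uniq -c | sort -nr | head
```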
The trick is to find something in common with the attack traffic that doesn't exist in your normal traffic, and then filter on it with iptables, mod_rewrite, or upstream with your webhost. If you are getting hit with Slowloris, Apache 2.2.15 and later ship the mod_reqtimeout module, which lets you configure some settings to better protect against Slowloris.
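For reference, a minimal mod_reqtimeout stanza, sketched with example values (tune them for your traffic; requires Apache 2.2.15+ with the module enabled):

```apache
# Assumes Apache 2.2.15+ with mod_reqtimeout available.
<IfModule mod_reqtimeout.c>
    # Allow 20-40 seconds for request headers, extended only while the
    # client keeps sending at least 500 bytes/second; same idea for the
    # request body. Slow clients like Slowloris get disconnected.
    RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
</IfModule>
```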