It's hard to give specific advice from what you've posted here but I do have some generic advice based on a post I wrote ages ago back when I could still be bothered to blog.
Don't Panic
First things first, there are no "quick fixes" other than restoring your system from a backup taken prior to the intrusion, and this has at least two problems.
- It's difficult to pinpoint when the intrusion happened.
- It doesn't help you close the "hole" that allowed them to break in last time, nor deal with the consequences of any "data theft" that may also have taken place.
This question is asked again and again by the victims of hackers breaking into their web servers. The answers very rarely change, but people keep asking the question. I'm not sure why. Perhaps people just don't like the answers they've seen when searching for help, or they can't find someone they trust to give them advice. Or perhaps people read an answer to this question and focus too much on the 5% of their case that is special and different from the answers they can find online, and miss the 95% of the question and answer where their case is near enough the same as the one they read online.
That brings me to the first important nugget of information. I really do appreciate that you are a special unique snowflake. I appreciate that your website is too, as it's a reflection of you and your business or at the very least, your hard work on behalf of an employer. But to someone on the outside looking in, whether a computer security person looking at the problem to try and help you or even the attacker himself, it is very likely that your problem will be at least 95% identical to every other case they've ever looked at.
Don't take the attack personally, and don't take the recommendations that follow here or that you get from other people personally. If you are reading this after just becoming the victim of a website hack then I really am sorry, and I really hope you can find something helpful here, but this is not the time to let your ego get in the way of what you need to do.
You have just found out that your server(s) got hacked. Now what?
Do not panic. Absolutely do not act in haste, and absolutely do not try and pretend things never happened and not act at all.
First: understand that the disaster has already happened. This is not the time for denial; it is the time to accept what has happened, to be realistic about it, and to take steps to manage the consequences of the impact.
Some of these steps are going to hurt, and (unless your website holds a copy of my details) I really don't care if you ignore all or some of these steps; that's up to you. But following them properly will make things better in the end. The medicine might taste awful but sometimes you have to overlook that if you really want the cure to work.
Stop the problem from becoming worse than it already is:
- The first thing you should do is disconnect the affected systems from the Internet. Whatever other problems you have, leaving the system connected to the web will only allow the attack to continue. I mean this quite literally; get someone to physically visit the server and unplug network cables if that is what it takes, but disconnect the victim from its muggers before you try to do anything else.
- Change all your passwords for all accounts on all computers that are on the same network as the compromised systems. No really. All accounts. All computers. Yes, you're right, this might be overkill; on the other hand, it might not. You don't know either way, do you?
- Check your other systems. Pay special attention to other Internet facing services, and to those that hold financial or other commercially sensitive data.
- If the system holds anyone's personal data, immediately inform the person responsible for data protection (if that's not you) and URGE a full disclosure. I know this one is tough. I know this one is going to hurt. I know that many businesses want to sweep this kind of problem under the carpet but the business is going to have to deal with it - and needs to do so with an eye on any and all relevant privacy laws.
However annoyed your customers might be to have you tell them about a problem, they'll be far more annoyed if you don't tell them, and they only find out for themselves after someone charges $8,000 worth of goods using the credit card details they stole from your site.
Remember what I said previously? The bad thing has already happened. The only question now is how well you deal with it.
Understand the problem fully:
- Do NOT put the affected systems back online until this stage is fully complete, unless you want to be the person whose post was the tipping point for me actually deciding to write this article. I'm not going to link to that post so that people can get a cheap laugh, but the real tragedy is when people fail to learn from their mistakes.
- Examine the 'attacked' systems to understand how the attacks succeeded in compromising your security. Make every effort to find out where the attacks "came from", so that you understand what problems you have and need to address to make your system safe in the future (there is a minimal log-scanning sketch after this list to get you started).
- Examine the 'attacked' systems again, this time to understand where the attacks went, so that you understand what systems were compromised in the attack. Ensure you follow up any pointers that suggest compromised systems could become a springboard to attack your systems further.
- Ensure the "gateways" used in any and all attacks are fully understood, so that you may begin to close them properly. (e.g. if your systems were compromised by a SQL injection attack, then not only do you need to close the particular flawed line of code that they broke in by, you would want to audit all of your code to see if the same type of mistake was made elsewhere).
- Understand that attacks might succeed because of more than one flaw. Often, attacks succeed not through finding one major bug in a system but by stringing together several issues (sometimes minor and trivial by themselves) to compromise a system. For example, using SQL injection attacks to send commands to a database server, discovering the website/application you're attacking is running in the context of an administrative user and using the rights of that account as a stepping-stone to compromise other parts of a system. Or as hackers like to call it: "another day in the office taking advantage of common mistakes people make".
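On the "where did the attacks come from" point above, a first pass is often just reading your web server access logs with a sceptical eye. Here is a minimal sketch of that idea, assuming an Apache-style combined log at an assumed path; the signatures are a handful of illustrative examples and nowhere near exhaustive, so treat this as a starting point for your own digging rather than a forensic tool.

```python
#!/usr/bin/env python3
"""Rough first pass over a web server access log, flagging requests that
commonly appear in break-in attempts. Path and patterns are illustrative."""
import re
from collections import Counter

ACCESS_LOG = "/var/log/apache2/access.log"   # assumed path; adjust to your server

# A few crude signatures of common attack traffic -- far from exhaustive.
SUSPICIOUS = [
    re.compile(r"union\s+select", re.IGNORECASE),        # SQL injection probes
    re.compile(r"\.\./\.\."),                            # path traversal
    re.compile(r"(?:;|\|\||&&)\s*(?:wget|curl|sh)\b"),   # command injection attempts
    re.compile(r"<script", re.IGNORECASE),               # reflected XSS probes
]

hits_by_ip = Counter()
with open(ACCESS_LOG, errors="replace") as log:
    for line in log:
        if not line.strip():
            continue
        if any(pattern.search(line) for pattern in SUSPICIOUS):
            hits_by_ip[line.split()[0]] += 1    # first field is the client IP
            print(line.rstrip())

print("\nTop sources of suspicious requests:")
for ip, count in hits_by_ip.most_common(10):
    print(f"{count:6d}  {ip}")
```

Anything it flags is a lead to follow up, not proof of compromise by itself; cross-reference the timestamps against what you find on the box.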
Why not just "repair" the exploit or rootkit you've detected and put the system back online?
In situations like this the problem is that you don't have control of that system any more. It's not your computer any more.
The only way to be certain that you've got control of the system is to rebuild the system. While there's a lot of value in finding and fixing the exploit used to break into the system, you can't be sure about what else has been done to the system once the intruders gained control (indeed, it's not unheard of for hackers who recruit systems into a botnet to patch the exploits they used themselves, to safeguard "their" new computer from other hackers, as well as installing their rootkit).
Make a plan for recovery and to bring your website back online and stick to it:
Nobody wants to be offline for longer than they have to be. That's a given. If this website is a revenue generating mechanism then the pressure to bring it back online quickly will be intense. Even if the only thing at stake is your / your company's reputation, this is still going to generate a lot of pressure to put things back up quickly.
However, don't give in to the temptation to go back online too quickly. Instead, move as fast as possible to understand what caused the problem and to solve it before you go back online, or else you will almost certainly fall victim to an intrusion once again, and remember, "to get hacked once can be classed as misfortune; to get hacked again straight afterward looks like carelessness" (with apologies to Oscar Wilde).
- I'm assuming you've understood all the issues that led to the successful intrusion in the first place before you even start this section. I don't want to overstate the case but if you haven't done that first then you really do need to. Sorry.
- Never pay blackmail / protection money. This is the sign of an easy mark and you don't want that phrase ever used to describe you.
- Don't be tempted to put the same server(s) back online without a full rebuild. It should be far quicker to build a new box or "nuke the server from orbit and do a clean install" on the old hardware than it would be to audit every single corner of the old system to make sure it is clean before putting it back online again. If you disagree with that then you probably don't know what it really means to ensure a system is fully cleaned, or your website deployment procedures are an unholy mess. You presumably have backups and test deployments of your site that you can just use to build the live site, and if you don't then being hacked is not your biggest problem.
- Be very careful about re-using data that was "live" on the system at the time of the hack. I won't say "never ever do it" because you'll just ignore me, but frankly I think you do need to consider the consequences of keeping data around when you know you cannot guarantee its integrity. Ideally, you should restore this from a backup made prior to the intrusion. If you cannot or will not do that, you should be very careful with that data because it's tainted. You should especially be aware of the consequences to others if this data belongs to customers or site visitors rather than directly to you.
- Monitor the system(s) carefully. You should resolve to do this as an ongoing process in the future (more below), but take extra pains to be vigilant during the period immediately following your site coming back online (a minimal file-integrity sketch follows this list). The intruders will almost certainly be back, and if you can spot them trying to break in again, you will be able to see quickly whether you really have closed all the holes they used before, plus any they made for themselves, and you might gather useful information you can pass on to your local law enforcement.
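On that monitoring point, one cheap habit is an integrity check in the spirit of tripwire (mentioned further down): take a hash baseline of the parts of the filesystem you care about immediately after the rebuild, store it somewhere the server cannot alter, and compare against it regularly. A minimal sketch follows; the watched directories and baseline location are assumptions you should adjust, and a real deployment would normally use a proper tool rather than a home-grown script.

```python
#!/usr/bin/env python3
"""Bare-bones integrity check. Run with 'baseline' straight after the rebuild,
keep the baseline file somewhere the server cannot alter, then run 'verify'
regularly (e.g. from cron) and alert on any output."""
import hashlib
import json
import os
import sys

WATCHED_DIRS = ["/etc", "/var/www"]          # assumed paths; pick what matters to you
BASELINE_FILE = "integrity-baseline.json"    # keep a copy off the server

def snapshot():
    """Return {path: sha256} for every readable file under the watched dirs."""
    hashes = {}
    for root_dir in WATCHED_DIRS:
        for dirpath, _dirnames, filenames in os.walk(root_dir):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        hashes[path] = hashlib.sha256(f.read()).hexdigest()
                except OSError:
                    continue    # unreadable files are skipped, not fatal
    return hashes

if __name__ == "__main__":
    mode = sys.argv[1] if len(sys.argv) > 1 else "verify"
    if mode == "baseline":
        with open(BASELINE_FILE, "w") as out:
            json.dump(snapshot(), out, indent=2)
    else:
        with open(BASELINE_FILE) as src:
            baseline = json.load(src)
        current = snapshot()
        for path in sorted(set(baseline) | set(current)):
            if baseline.get(path) != current.get(path):
                print("CHANGED, NEW OR MISSING:", path)
```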
Reducing the risk in the future.
The first thing you need to understand is that security is a process that you have to apply throughout the entire life-cycle of designing, deploying and maintaining an Internet-facing system, not something you can slap a few layers over your code afterwards like cheap paint. To be properly secure, a service and an application need to be designed from the start with this in mind as one of the major goals of the project. I realise that's boring and you've heard it all before and that I "just don't realise the pressure man" of getting your beta web2.0 (beta) service into beta status on the web, but the fact is that this keeps getting repeated because it was true the first time it was said and it hasn't yet become a lie.
You can't eliminate risk. You shouldn't even try to do that. What you should do however is to understand which security risks are important to you, and understand how to manage and reduce both the impact of the risk and the probability that the risk will occur.
What steps can you take to reduce the probability of an attack being successful?
For example:
- Was the flaw that allowed people to break into your site a known bug in vendor code, for which a patch was available? If so, do you need to re-think your approach to how you patch applications on your Internet-facing servers?
- Was the flaw that allowed people to break into your site an unknown bug in vendor code, for which a patch was not available? I most certainly do not advocate changing suppliers whenever something like this bites you because they all have their problems and you'll run out of platforms in a year at the most if you take this approach. However, if a system constantly lets you down then you should either migrate to something more robust or at the very least, re-architect your system so that vulnerable components stay wrapped up in cotton wool and as far away as possible from hostile eyes.
- Was the flaw a bug in code developed by you (or a contractor working for you)? If so, do you need to re-think your approach to how you approve code for deployment to your live site? Could the bug have been caught with an improved test system, or with changes to your coding "standard"? (For example, while technology is not a panacea, you can reduce the probability of a successful SQL injection attack by using well-documented coding techniques such as parameterised queries; see the sketch after this list.)
- Was the flaw due to a problem with how the server or application software was deployed? If so, are you using automated procedures to build and deploy servers where possible? These are a great help in maintaining a consistent "baseline" state on all your servers, minimising the amount of custom work that has to be done on each one and hence hopefully minimising the opportunity for a mistake to be made. Same goes with code deployment - if you require something "special" to be done to deploy the latest version of your web app then try hard to automate it and ensure it always is done in a consistent manner.
- Could the intrusion have been caught earlier with better monitoring of your systems? Of course, 24-hour monitoring or an "on call" system for your staff might not be cost effective, but there are companies out there who can monitor your web facing services for you and alert you in the event of a problem. You might decide you can't afford this or don't need it and that's just fine... just take it into consideration.
- Use tools such as tripwire and nessus where appropriate - but don't just use them blindly because I said so. Take the time to learn how to use a few good security tools that are appropriate to your environment, keep these tools updated and use them on a regular basis.
- Consider hiring security experts to 'audit' your website security on a regular basis. Again, you might decide you can't afford this or don't need it and that's just fine... just take it into consideration.
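On the SQL injection example in the list above, "well-documented coding techniques" mostly means parameterised queries, along with input validation and least-privilege database accounts. Here is a minimal sketch using Python's built-in sqlite3 module, purely to illustrate the idea; the table and data are invented, and your own stack will have its own driver offering the same facility.

```python
#!/usr/bin/env python3
"""Parameterised queries versus string-building; sqlite3 is used only
because it ships with Python, the idea applies to any database driver."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_supplied = "nobody' OR '1'='1"   # hostile input a login form might receive

# DANGEROUS: the input becomes part of the SQL text itself.
dangerous = f"SELECT email FROM users WHERE name = '{user_supplied}'"
print("string-built query returns: ", conn.execute(dangerous).fetchall())

# SAFER: the input travels as a bound parameter, never as SQL.
safe = "SELECT email FROM users WHERE name = ?"
print("parameterised query returns:", conn.execute(safe, (user_supplied,)).fetchall())
```

The string-built version happily returns data it was never meant to; the parameterised version treats the hostile input as an ordinary (non-matching) value.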
What steps can you take to reduce the consequences of a successful attack?
If you decide that the "risk" of the lower floor of your home flooding is high, but not high enough to warrant moving, you should at least move the irreplaceable family heirlooms upstairs. Right?
- Can you reduce the amount of services directly exposed to the Internet? Can you maintain some kind of gap between your internal services and your Internet-facing services? This ensures that even if your external systems are compromised the chances of using this as a springboard to attack your internal systems are limited.
- Are you storing information you don't need to store? Are you storing such information "online" when it could be archived somewhere else? There are two points to this part; the obvious one is that people cannot steal information from you that you don't have, and the second point is that the less you store, the less you need to maintain and code for, and so there are fewer chances for bugs to slip into your code or systems design.
- Are you using "least access" principles for your web app? If users only need to read from a database, then make sure the account the web app uses to service this only has read access; don't allow it write access, and certainly not system-level access (a sketch of this follows after this list).
- If you're not very experienced at something and it is not central to your business, consider outsourcing it. In other words, if you run a small website talking about writing desktop application code and decide to start selling small desktop applications from the site then consider "outsourcing" your credit card order system to someone like Paypal.
- If at all possible, make practicing recovery from compromised systems part of your Disaster Recovery plan. This is arguably just another "disaster scenario" that you could encounter, simply one with its own set of problems and issues that are distinct from the usual 'server room caught fire'/'was invaded by giant server eating furbies' kind of thing.
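To make the "least access" point above concrete, here is a sketch assuming a PostgreSQL database and the psycopg2 driver; the role name, password and connection string are invented for the example. The web application then logs in as that limited role rather than as an owner or superuser, so a compromised app can read your data but not rewrite or destroy it.

```python
#!/usr/bin/env python3
"""Least-privilege setup for the web app's database account, assuming
PostgreSQL and the psycopg2 driver; role name, password and connection
string are invented for the example. Run once as an administrative user."""
import psycopg2

ADMIN_DSN = "dbname=shop user=postgres"   # assumed admin connection

statements = [
    # A dedicated login role for the web application, with no special rights.
    "CREATE ROLE webapp_ro LOGIN PASSWORD 'change-me'",
    # Strip any default privileges, then hand back only what is needed.
    "REVOKE ALL ON ALL TABLES IN SCHEMA public FROM webapp_ro",
    "GRANT USAGE ON SCHEMA public TO webapp_ro",
    "GRANT SELECT ON ALL TABLES IN SCHEMA public TO webapp_ro",
]

with psycopg2.connect(ADMIN_DSN) as conn:
    with conn.cursor() as cur:
        for sql in statements:
            cur.execute(sql)
```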
... And finally
I've probably left out no end of stuff that others consider important, but the steps above should at least help you start sorting things out if you are unlucky enough to fall victim to hackers.
Above all: Don't panic. Think before you act. Act firmly once you've made a decision, and leave a comment below if you have something to add to my list of steps.
Best Answer
You are experiencing a denial of service attack. If you see traffic coming from multiple networks (different IPs on different subnets) you've got a distributed denial of service (DDoS); if it's all coming from the same place you have a plain old DoS. It can be helpful to check which it is, if you are able; netstat will show you where current connections are coming from, though this can be hard to do while you're under attack.
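If you can still get a shell on the box, a quick way to tell the two apart is to count distinct sources among the current connections. Below is a minimal sketch that tallies netstat output from a pipe; netstat flags and column layout vary between platforms, so adjust to taste.

```python
#!/usr/bin/env python3
"""Tally remote addresses from netstat output, e.g.:
    netstat -ntu | python3 count_sources.py"""
import sys
from collections import Counter

sources = Counter()
for line in sys.stdin:
    fields = line.split()
    # Data lines look like: tcp  0  0  10.0.0.5:80  203.0.113.7:51812  ESTABLISHED
    if len(fields) >= 5 and fields[0].startswith(("tcp", "udp")):
        remote_ip = fields[4].rsplit(":", 1)[0]   # drop the port number
        sources[remote_ip] += 1

for ip, count in sources.most_common(20):
    print(f"{count:6d}  {ip}")
```

A handful of very busy addresses points at a plain DoS you may be able to block; thousands of distinct sources across many networks points at a DDoS.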
Denial of service usually falls into a couple of categories: traffic-based and load-based. Exploit-based DoS (where the service itself crashes) is quite different, and is covered last.
If you're trying to pin down what type of attack is happening, capture some traffic if you can (using wireshark, tcpdump, or libpcap), but be aware that you will probably capture quite a lot of it.
As often as not, these will come from botnets (networks of compromised hosts under the central control of some attacker, whose bidding they will do). This is a good way for the attacker to (very cheaply) acquire the upstream bandwidth of lots of different hosts on different networks to attack you with, while covering their tracks. The Low Orbit Ion Cannon is one example of a botnet (despite being voluntary instead of malware-derived); Zeus is a more typical one.
Traffic-based
If you're under a traffic-based DoS, you're finding that there is just so much traffic coming to your server that its connection to the Internet is completely saturated. There is a high packet loss rate when pinging your server from elsewhere, and (depending on routing methods in use) sometimes you're also seeing really high latency (the ping is high). This kind of attack is usually a DDoS.
While this is a really "loud" attack, and it's obvious what is going on, it's hard for a server administrator to mitigate (and basically impossible for a user of shared hosting to mitigate). You're going to need help from your ISP; let them know you're under a DDoS and they might be able to help.
However, most ISPs and transit providers will proactively realize what is going on and publish a blackhole route for your server. What this means is that they publish a route to your server with as little cost as possible, via 0.0.0.0: they make traffic to your server no longer routeable on the Internet. These routes are typically /32s and eventually they are removed. This doesn't help you at all; the purpose is to protect the ISP's network from the deluge. For the duration, your server will effectively lose Internet access.
The only way your ISP (or you, if you have your own AS) is going to be able to help is if they are using intelligent traffic shapers that can detect and rate-limit probable DDoS traffic. Not everyone has this technology. However, if the traffic is coming from one or two networks, or one host, they might also be able to block the traffic ahead of you.
In short, there is very little you can do about this problem. The best long-term solution is to host your services in many different locations on the Internet which would have to be DDoSed individually and simultaneously, making the DDoS much more expensive. Strategies for this depend on the service you need to protect; DNS can be protected with multiple authoritative nameservers, SMTP with backup MX records and mail exchangers, and HTTP with round-robin DNS or multihoming (but some degradation might be noticeable for the duration anyway).
Load balancers are rarely an effective solution to this problem, because the load balancer itself is subject to the same problem and merely creates a bottleneck. IPTables or other firewall rules will not help because the problem is that your pipe is saturated. Once the connections are seen by your firewall, it is already too late; the bandwidth into your site has been consumed. It doesn't matter what you do with the connections; the attack is mitigated or finished when the amount of incoming traffic goes back down to normal.
If you are able to do so, consider using a content distribution network (CDN) like Akamai, Limelight and CDN77, or use a DDoS scrubbing service like CloudFlare or Prolexic. These services take active measures to mitigate these types of attacks, and also have so much available bandwidth in so many different places that flooding them is not generally feasible.
If you decide to use CloudFlare (or any other CDN/proxy) remember to hide your server's IP. If an attacker finds out the IP, he can again DDoS your server directly, bypassing CloudFlare. To hide the IP, your server should never communicate directly with other servers/users unless they are safe. For example your server should not send emails directly to users. This doesn't apply if you host all your content on the CDN and don't have a server of your own.
Also, some VPS and hosting providers are better at mitigating these attacks than others. In general, the larger they are, the better they will be at this; a provider which is very well-peered and has lots of bandwidth will be naturally more resilient, and one with an active and fully staffed network operations team will be able to react more quickly.
Load-based
When you are experiencing a load-based DoS, you notice that the load average is abnormally high (or CPU, RAM, or disk usage, depending on your platform and the specifics). Although the server doesn't appear to be doing anything useful, it is very busy. Often, there will be copious entries in the logs indicating unusual conditions. More often than not this is coming from a lot of different places and is a DDoS, but that isn't necessarily the case. There don't even have to be a lot of different hosts.
This attack is based on making your service do a lot of expensive stuff. This could be something like opening a gargantuan number of TCP connections and forcing you to maintain state for them, or uploading excessively large or numerous files to your service, or perhaps doing really expensive searches, or really doing anything that is expensive to handle. The traffic is within the limits of what you planned for and can take on, but the types of requests being made are too expensive to handle so many of.
Firstly, that this type of attack is possible is often indicative of a configuration issue or bug in your service. For instance, you may have overly verbose logging turned on, and may be storing logs on something that's very slow to write to. If someone realizes this and does a lot of something which causes you to write copious amounts of logs to disk, your server will slow to a crawl. Your software might also be doing something extremely inefficient for certain input cases; the causes are as numerous as there are programs, but two examples would be a situation that causes your service to not close a session that is otherwise finished, and a situation that causes it to spawn a child process and leave it. If you end up with tens of thousands of open connections with state to keep track of, or tens of thousands of child processes, you'll run into trouble.
The first thing you might be able to do is use a firewall to drop the traffic. This isn't always possible, but if there is a characteristic you can find in the incoming traffic (tcpdump can be nice for this if the traffic is light), you can drop it at the firewall and it will no longer cause trouble. The other thing to do is to fix the bug in your service (get in touch with the vendor and be prepared for a long support experience).
However, if it's a configuration issue, start there. Turn down logging on production systems to a reasonable level (depending on the program this is usually the default, and will usually involve making sure "debug" and "verbose" levels of logging are off; if everything a user does is logged in exact and fine detail, your logging is too verbose). Additionally, check child process and request limits, possibly throttle incoming requests, connections per IP, and the number of allowed child processes, as applicable.
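If the expensive requests can only be recognised inside your own application, a crude per-client throttle in code can buy some breathing room while you fix the underlying issue. Here is a minimal token-bucket sketch; the rate and burst figures are invented, and in practice you would usually enforce limits in the web server or a reverse proxy in front of the application instead.

```python
#!/usr/bin/env python3
"""Per-client token-bucket throttle, as a sketch of application-level
rate limiting. RATE and BURST are invented numbers for illustration."""
import time
from collections import defaultdict

RATE = 5.0    # allowed requests per second, per client IP
BURST = 20.0  # short bursts tolerated before throttling kicks in

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(client_ip: str) -> bool:
    """Refill the client's bucket with elapsed time, spend one token per request."""
    bucket = _buckets[client_ip]
    now = time.monotonic()
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False    # caller should respond with something like HTTP 429

if __name__ == "__main__":
    # 30 rapid-fire requests from one address: roughly the first 20 get through.
    verdicts = [allow_request("203.0.113.7") for _ in range(30)]
    print(verdicts.count(True), "allowed,", verdicts.count(False), "rejected")
```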
It goes without saying that the better configured and better provisioned your server is, the harder this type of attack will be. Avoid being stingy with RAM and CPU in particular. Ensure your connections to things like backend databases and disk storage are fast and reliable.
Exploit-based
If your service mysteriously crashes extremely quickly after being brought up, particularly if you can establish a pattern of requests that precede the crash and the request is atypical or doesn't match expected use patterns, you might be experiencing an exploit-based DoS. This can come from as few as just one host (with pretty much any type of internet connection), or many hosts.
This is similar to a load-based DoS in many respects, and has basically the same causes and mitigations. The difference is merely that in this case, the bug doesn't cause your server to be wasteful, but to die. The attacker is usually exploiting a remote crash vulnerability, such as garbled input that causes a null-dereference or something in your service.
Handle this similarly to an unauthorized remote access attack. Firewall against the originating hosts and type of traffic if they can be pinned down. Use validating reverse proxies if applicable. Gather forensic evidence (try and capture some of the traffic), file a bug ticket with the vendor, and consider filing an abuse complaint (or legal complaint) against the origin too.
These attacks are fairly cheap to mount, if an exploit can be found, and they can be very potent, but also relatively easy to track down and stop. However, techniques that are useful against traffic-based DDoS are generally useless against exploit-based DoS.