You can host multiple websites on a single IP through the magic of name-based virtual hosting. Whether or not SoftLayer is "cheating" depends entirely on what precisely they've agreed to provide, and that's something you'd need to take up with them. If their tech support is at all reasonable, then they should be able to clear up any confusion with you, and if they can't, then you perhaps need to reconsider whether they're suitably competent to provide you with hosting services.
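Name-based virtual hosting works because every HTTP/1.1 request carries a `Host` header, so one server on one IP can dispatch to different sites by name. A minimal sketch in Python (the site names and document roots below are made-up examples, not anything SoftLayer actually uses):

```python
# Name-based virtual hosting in miniature: pick a document root based on
# the HTTP Host header. All hostnames and paths are hypothetical examples.
VHOSTS = {
    "example.com": "/var/www/example",
    "shop.example.net": "/var/www/shop",
}
DEFAULT_ROOT = "/var/www/default"

def docroot_for(host_header: str) -> str:
    """Strip any :port suffix, lowercase, and look up the site's docroot."""
    host = host_header.split(":", 1)[0].lower()
    return VHOSTS.get(host, DEFAULT_ROOT)
```

Real servers like Apache and nginx do exactly this lookup (via `ServerName` / `server_name` directives) before any file is served.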
As stated by many others, IP headers are trivial to forge, as long as one doesn't care about receiving a response. This is why it is mostly seen with UDP, as TCP requires a 3-way handshake. One notable exception is the SYN flood, which uses TCP and attempts to tie up resources on a receiving host; again, as the replies are discarded, the source address does not matter.
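To see how trivial forgery is, here is a sketch that builds a raw IPv4 header claiming an arbitrary source address, using only the Python standard library. Actually putting such a packet on the wire would need a raw socket and root privileges; the point is simply that nothing in the header format authenticates the source field. Addresses are documentation-range examples.

```python
import socket
import struct

def ip_checksum(data: bytes) -> int:
    """One's-complement sum over 16-bit words, per RFC 791."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def forged_ipv4_header(src: str, dst: str, payload_len: int = 8) -> bytes:
    """Build a 20-byte IPv4 header with a spoofed source address.

    Nothing validates 'src' -- that is the whole point of spoofing.
    Protocol 17 = UDP, which matches the common spoofing case above.
    """
    ver_ihl = (4 << 4) | 5              # IPv4, header length 5 x 32-bit words
    total_len = 20 + payload_len
    header = struct.pack(
        "!BBHHHBBH4s4s",
        ver_ihl, 0, total_len,
        0x1234, 0,                      # identification, flags/fragment offset
        64, 17, 0,                      # TTL, protocol (UDP), checksum placeholder
        socket.inet_aton(src), socket.inet_aton(dst),
    )
    checksum = ip_checksum(header)      # checksum computed over header only
    return header[:10] + struct.pack("!H", checksum) + header[12:]
```

A receiver verifying this header (the checksum over a valid header sums to zero) has no way to tell that `src` was fabricated.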
A particularly nasty side-effect of the ability of attackers to spoof source addresses is a backscatter attack. There is an excellent description here, but briefly, it is the inverse of a traditional DDoS attack:
1. Gain control of a botnet.
2. Configure all your nodes to use the same source IP address for malicious packets. This IP address will be your eventual victim.
3. Send packets from all of your controlled nodes to various addresses across the internet, targeting ports that generally are not open, or connecting to valid ports (TCP/80) claiming to be part of an already existing transaction.
In either of the cases mentioned in step 3, many hosts will respond with an ICMP unreachable or a TCP reset, targeted at the source address of the malicious packet. The attacker now has potentially thousands of uncompromised machines on the network performing a DDoS attack on their chosen victim, all through the use of a spoofed source IP address.
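The mechanism above can be shown in a toy simulation: every spoofed packet names the victim as its source, so every reflector's reply (RST or ICMP unreachable) lands on the victim rather than the attacker. The reflector and victim addresses are made-up documentation-range examples.

```python
# Toy model of backscatter: 100 innocent hosts each answer one spoofed
# packet, and all 100 replies converge on the spoofed source address.
from collections import Counter

VICTIM = "203.0.113.50"
reflectors = [f"192.0.2.{i}" for i in range(1, 101)]  # 100 uncompromised hosts

inbox = Counter()  # packets arriving at each address

for reflector in reflectors:
    spoofed = {"src": VICTIM, "dst": reflector, "flags": "SYN"}
    # The reflector answers the *claimed* source, not the real attacker:
    reply = {"src": reflector, "dst": spoofed["src"], "flags": "RST"}
    inbox[reply["dst"]] += 1

print(inbox[VICTIM])  # all 100 replies hit the victim
```

Scale the reflector list up to the size of the internet and the victim drowns in traffic from machines that were never compromised at all.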
In terms of mitigation, this risk is really one that only ISPs (and particularly ISPs providing customer access, rather than transit) can address. There are two main methods of doing this:
Ingress filtering - ensuring packets coming into your network are sourced from address ranges that live on the far side of the incoming interface. Many router vendors implement features such as Unicast Reverse Path Forwarding (uRPF), which use the router's routing and forwarding tables to verify that the best route back to the source address of an incoming packet points out the interface the packet arrived on. This is best performed at the first layer 3 hop in the network (i.e. your default gateway).
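The strict-mode uRPF check is easy to sketch: longest-prefix-match the packet's source address against the forwarding table, and accept only if the matched route points out the arrival interface. The prefix table and interface names below are hypothetical.

```python
# Sketch of strict-mode uRPF against a tiny forwarding table.
# Prefixes and interface names are made-up examples.
import ipaddress

FIB = [
    (ipaddress.ip_network("10.1.0.0/16"), "eth0"),  # customer-facing LAN
    (ipaddress.ip_network("0.0.0.0/0"), "eth1"),    # default route (upstream)
]

def urpf_accept(src_ip: str, arrival_iface: str) -> bool:
    """Longest-prefix-match the source; require the route to face the arrival interface."""
    src = ipaddress.ip_address(src_ip)
    matches = [(net, iface) for net, iface in FIB if src in net]
    if not matches:
        return False  # no route back to this source at all
    _, best_iface = max(matches, key=lambda m: m[0].prefixlen)
    return best_iface == arrival_iface
```

A packet arriving on `eth0` claiming a source outside `10.1.0.0/16` fails the check and is dropped - exactly the behaviour that defeats the spoofing described above.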
Egress filtering - ensuring that packets leaving your network are sourced only from address ranges you own. This is the natural complement to ingress filtering, and is essentially part of being a 'good neighbor': even if hosts on your network are compromised, their spoofed traffic is not forwarded to networks you peer with.
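The egress rule is even simpler to express: a packet may leave only if its source address falls inside address space you actually own. The owned prefixes here are documentation-range examples.

```python
# Sketch of an egress filter: outbound packets must be sourced from our
# own address space. The prefixes are hypothetical examples.
import ipaddress

OWNED = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def egress_allow(src_ip: str) -> bool:
    """True only if the outbound packet's source belongs to our prefixes."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in OWNED)
```

Had every access network applied this rule, the spoofed-source attacks described earlier would never make it past the first hop.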
Both of these techniques are most effective and easily implemented when done so in 'edge' or 'access' networks, where clients interface with the provider. Implementing ingress/egress filtering above the access layer becomes more difficult, due to the complexities of multiple paths and asymmetric routing.
I have seen these techniques (particularly ingress filtering) used to great effect within an enterprise network. Perhaps someone with more service provider experience can give more insight into the challenges of deploying ingress/egress filtering on the internet at large. I imagine hardware/firmware support to be a big challenge, as well as being unable to force upstream providers in other countries to implement similar policies...
Primary and secondary DNS servers on the same box? Not such a good idea...
Some possible uses: