First and foremost, there is nothing to fear from being on a public IP allocation, so long as your security devices are configured right.
What should I be replacing NAT with, if we don't have physically separate networks?
The same thing we've been physically separating them with since the 1980s: routers and firewalls. The one big security gain you get with NAT is that it forces you into a default-deny configuration. In order to get any service through it, you have to explicitly punch holes. The fancier devices even allow you to apply IP-based ACLs to those holes, just like a firewall. Probably because they have 'Firewall' on the box, actually.
A correctly configured firewall provides exactly the same service as a NAT gateway. NAT gateways are frequently used because they're easier to get into a secure config than most firewalls.
I hear that IPv6 and IPSEC are supposed to make all this secure somehow, but without physically separated networks that make these devices invisible to the Internet, I really can't see how.
This is a misconception. I work for a University that has a /16 IPv4 allocation, and the vast, vast majority of our IP address consumption is on that public allocation. Certainly all of our end-user workstations and printers. Our RFC1918 consumption is limited to network devices and certain specific servers where such addresses are required. I would not be surprised if you shivered just now, because I certainly did when I showed up on my first day and saw the post-it on my monitor with my IP address.
And yet, we survive. Why? Because we have an exterior firewall configured for default-deny with limited ICMP throughput. Just because 140.160.123.45 is theoretically routeable, does not mean you can get there from wherever you are on the public internet. This is what firewalls were designed to do.
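As a sketch of what that exterior policy can look like, here is a minimal iptables-style default-deny ruleset with rate-limited ICMP. This is an illustration, not our actual config; the allowed port and rate limit are hypothetical choices, and these commands need root and netfilter to run.

```shell
# Default-deny inbound: nothing reaches the campus unless explicitly allowed.
iptables -P FORWARD DROP

# Allow replies to connections initiated from inside.
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# An explicitly punched hole: one public web server, and nothing else.
iptables -A FORWARD -d 140.160.123.45 -p tcp --dport 443 -j ACCEPT

# Limited ICMP throughput: allow ping, but rate-limit it.
iptables -A FORWARD -p icmp --icmp-type echo-request -m limit --limit 5/second -j ACCEPT
```

With a ruleset shaped like this, 140.160.123.45 stays routeable but only port 443 is actually reachable, which is the whole point.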
Given the right router configs, different subnets in our allocation can be completely unreachable from each other. You can do this in router tables or firewalls. This is a separate network, and it has satisfied our security auditors in the past.
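A hedged sketch of that kind of isolation, expressed as iptables rules on the router joining two hypothetical subnets carved out of a public /16 (the addresses are made up for illustration):

```shell
# Two subnets from the same public /16; addresses are hypothetical.
# Traffic between them is dropped at the router, so being 'public'
# does not make them mutually reachable.
iptables -A FORWARD -s 140.160.10.0/24 -d 140.160.20.0/24 -j DROP
iptables -A FORWARD -s 140.160.20.0/24 -d 140.160.10.0/24 -j DROP
```

The same effect can be had with router ACLs or by simply not exchanging routes between the two segments.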
There's no way in hell I'll put our billing database (With lots of credit card information!) on the internet for everyone to see.
Our billing database is on a public IPv4 address, and has been for its entire existence, but we have proof you can't get there from here. Just because an address is on the public v4 routeable list does not mean it is guaranteed to be delivered. The two firewalls between the evils of the Internet and the actual database ports filter out the evil. Even from my desk, behind the first firewall, I can't get to that database.
Credit-card information is one special case. It's subject to the PCI-DSS standards, and the standards state directly that servers that contain such data have to be behind a NAT gateway [1]. Ours are, and these three servers represent our total server usage of RFC1918 addresses. It doesn't add any security, just a layer of complexity, but we need to get that checkbox checked for audits.
The original "IPv6 makes NAT a thing of the past" idea was put forward before the Internet boom really hit full mainstream. In 1995, NAT was a workaround for a small IP allocation. In 2005, it was enshrined in many Security Best Practices documents, and in at least one major standard (PCI-DSS, to be specific). The only concrete benefit NAT gives is that an external entity performing recon on the network doesn't know what the IP landscape looks like behind the NAT device (though thanks to RFC1918 they have a good guess), and on NAT-free IPv4 (such as my work) that isn't the case. It's a small step in defense-in-depth, not a big one.
The replacement for RFC1918 addresses are what are called Unique Local Addresses. Like RFC1918, they don't route unless peers specifically agree to route them. Unlike RFC1918, they are (probably) globally unique. IPv6 address translators that translate a ULA to a global IP do exist in higher-end perimeter gear, but definitely not in the SOHO gear yet.
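To illustrate why ULAs are "probably" unique: RFC 4193 builds them from fd00::/8 plus 40 pseudo-random bits. Here is a rough shell sketch of generating one; a real generator hashes a timestamp and MAC address, whereas this simplified version just reads /dev/urandom.

```shell
# 40 random bits as 10 hex characters
RANDHEX=$(od -An -N5 -tx1 /dev/urandom | tr -d ' \n')

# Assemble fdXX:XXXX:XXXX::/48 from the fd00::/8 prefix plus the random bits
ULA48=$(printf 'fd%s:%s:%s' "$(echo "$RANDHEX" | cut -c1-2)" \
                            "$(echo "$RANDHEX" | cut -c3-6)" \
                            "$(echo "$RANDHEX" | cut -c7-10)")
echo "$ULA48::/48"
```

With 40 random bits, two sites picking prefixes independently are overwhelmingly unlikely to collide, which is what makes future network merges painless compared to everyone squatting on 192.168.1.0/24.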
You can survive just fine with a public IP address. Just keep in mind that 'public' does not guarantee 'reachable', and you'll be fine.
2017 update
In the past few months, Amazon AWS has been adding IPv6 support. It has just been added to their amazon-vpc offering, and their implementation gives some clues as to how large-scale deployments are expected to be done.
- You are given a /56 allocation (256 subnets).
- The allocation is a fully routeable subnet.
- You are expected to set your firewall-rules (security-groups) appropriately restrictive.
- There is no NAT, it's not even offered, so all outbound traffic will come from the actual IP address of the instance.
To add one of the security benefits of NAT back in, they are now offering an Egress-only Internet Gateway. This offers one NAT-like benefit:
- Subnets behind it can't be directly accessed from the internet.
Which provides a layer of defense-in-depth, in case a misconfigured firewall rule accidentally allows inbound traffic.
This offering does not translate the internal address into a single address the way NAT does. Outbound traffic will still have the source IP of the instance that opened the connection. Firewall operators looking to whitelist resources in the VPC will be better off whitelisting netblocks, rather than specific IP addresses.
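For a firewall operator on the receiving end, that means rules shaped like the following hypothetical ip6tables sketch, which admits the VPC's whole netblock instead of chasing individual instance addresses (the prefix is a documentation address, not a real allocation):

```shell
# Accept HTTPS from the entire /56 the VPC was allocated,
# instead of one rule per instance IP.
ip6tables -A INPUT -s 2001:db8:1234:ab00::/56 -p tcp --dport 443 -j ACCEPT
```

Since instances come and go, whitelisting the netblock survives autoscaling; whitelisting individual addresses does not.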
Routeable does not always mean reachable.
[1]: The PCI-DSS standards changed in October 2010; the statement mandating RFC1918 addresses was removed, and 'network isolation' replaced it.
The short answer: on a system that you are running radvd on, you want to configure the interface using the same method as you use to configure radvd; if radvd.conf is statically generated, then so should your local Ethernet interface's IPv6 address be. But all is not lost; read on for more detail.
What you can do is use a small shell script to configure both. Let's say for a moment that you have a dynamically assigned global IPv4 address, and this is the only IPv4 address on your interface; you can use the following shell script snippet to obtain the IPv6 /48 prefix (note: code adapted from ARIN):
# Grab the primary IPv4 address from eth0
IPV4=$(ip addr ls eth0 | grep 'inet ' | awk '{ print $2 }' | cut -f1 -d/)
# Split it into its four octets
PARTS=$(echo "$IPV4" | tr . ' ')
# Build the 6to4 /48 prefix: 2002::/16 plus the IPv4 address in hex
PREFIX48=$(printf '2002:%02x%02x:%02x%02x' $PARTS)
Now you have the /48 prefix; getting a /64 prefix is simple enough, since you can just append a subnet ID to the $PREFIX48 variable.
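For example, picking subnet ID 1 (the sample IPv4 address and the choice of subnet ID here are arbitrary, just to show the arithmetic):

```shell
# Build the 6to4 /48 from a sample IPv4 address, then pick subnet 1.
IPV4=192.0.2.1
PARTS=$(echo "$IPV4" | tr . ' ')
PREFIX48=$(printf '2002:%02x%02x:%02x%02x' $PARTS)
PREFIX64="$PREFIX48:1"
echo "$PREFIX64::/64"   # 2002:c000:0201:1::/64
```

Each octet of the IPv4 address becomes two hex digits, so 192.0.2.1 yields the 6to4 prefix 2002:c000:0201::/48, and the subnet ID fills the fourth group.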
Now, all that would be left for you to do is write the script that writes out the network interface configuration and radvd configuration (presumably, from a template for each of them) and make that script run before your network configuration does. I'll not be including that code here as I do not know what distribution you are using, and it differs depending on that.
Hope this helps.
IPv4
The reason IPv4 behaves the way you have observed is that it wasn't originally intended to support more than one IP address on an interface. Hence the DHCP client will just wait for the first DHCP server to send an offer, and the client will pretty much assume that's the only DHCP server.
IPv6
IPv6 however is intended to support many addresses on an interface, so the machine can accept router advertisements from multiple routers and assign multiple IP addresses to the interface.
However, unless you start using fancy setups such as policy routing, you will have a routing table which only considers the destination address for routing decisions, and which has just one default route pointing to one of the two routers.
So you can expect packets to be routed to the same router regardless of which source address the client is using. The choice of source address considers a number of factors, including whether the source address is one of those assigned to the interface the packet will eventually be routed through. That doesn't help here, however, since the client will use the same interface to reach both routers.
Thus it's possible that the client machine ends up choosing a source IP address in the range assigned by one router but will send the packets to the other router. This would have worked if it wasn't for packet filters on the way. Even if you can somehow find a solution for those packet filters, it still isn't an ideal setup as you wouldn't really get a redundant setup when outgoing traffic goes through one router and incoming traffic goes through the other.
There isn't enough information in your question to say for certain that this is why it fails for you. However given the information you have provided it does sound like the most likely explanation.
Simple solution
In the short term, the simplest solution I can suggest for getting this setup to work and giving you redundancy is to configure one of the routers as IPv4-only and the other as IPv6-only.
Clients which implement RFC 6555 will automatically fail over between IPv4 and IPv6, so with IPv4 and IPv6 going through different routers it would fail over between those two routers.
This will work for reaching services which are dual stack. You can achieve the same failover for IPv4-only services by using DNS64+NAT64. If your ISP doesn't provide NAT64 for you, you could configure NAT64 on the router you choose to use for IPv6 connectivity.
Long term solution
In the long term, you presumably want redundancy even when accessing IPv6-only services. The best way to achieve that is for the client machine to automatically choose between the two routers. That means logic similar to RFC 6555 needs to choose between two IPv6 connections.
I haven't seen software supporting that today. Some parts of it can be done with policy routing, but that means you need to override routing tables otherwise produced by auto configuration on each client machine. A simpler and slightly worse solution is to only configure the policy routing on the routers at the cost of having outgoing traffic often being sent from client to the wrong router and then taking an extra hop between the two routers.
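The policy-routing half can be sketched with ip rule and ip route. All prefixes and gateway addresses below are hypothetical documentation values; the idea is simply that the source prefix selects which router's table handles the packet.

```shell
# Packets sourced from router A's prefix use table 100 (default via router A).
ip -6 rule add from 2001:db8:a::/64 table 100
ip -6 route add default via fe80::a dev eth0 table 100

# Packets sourced from router B's prefix use table 200 (default via router B).
ip -6 rule add from 2001:db8:b::/64 table 200
ip -6 route add default via fe80::b dev eth0 table 200
```

This guarantees traffic leaves via the router that advertised the prefix it is sourced from, avoiding the filtered-asymmetric-path problem, but it does nothing by itself to fail over when one router dies.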
But to get the full benefit you need software which does all of this: