If you are interested, I have a patch for keepalived which allows it to use unicast between a local and a remote VIP. I've successfully been using it at vps.net between virtual machines.
It's a lot simpler than trying to set up a tunnel! I've uploaded it here:
http://1wt.eu/keepalived/
You then just have to specify "vrrp_unicast_bind" and "vrrp_unicast_peer". It will still use the VRRP protocol, but only between those IPs.
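For reference, a minimal sketch of how those directives might sit in keepalived.conf (the addresses, interface, and instance name here are made-up examples; the vrrp_unicast_* directives come from the patch above):

```
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    # added by the unicast patch -- example addresses
    vrrp_unicast_bind 10.0.0.1    # local address VRRP is sent from
    vrrp_unicast_peer 10.0.0.2    # remote peer VRRP is sent to
    virtual_ipaddress {
        10.0.0.100
    }
}
```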
Hoping this helps !
Once again, I've managed to tinker my way through the problem (though it took rather longer than the original question and this answer suggest :). I spent almost a month researching a solution, and I'll leave it documented here in case anyone else bumps into the same problem.
Actually, the loopback interface really is what I knew it to be: an address assigned to a dummy, always-up interface on a machine. The connectivity problem between the remote GRE router and my router was due to something else entirely: GRE keep-alive packets.
It turned out that the remote Cisco router was sending me odd GRE-encapsulated packets through the tunnel. These packets encapsulated another GRE packet, which in turn carried a protocol number of zero. A quick search indicated that these are GRE keep-alive packets, which are sent periodically (in my case, almost exactly every 10 seconds) and, if properly decapsulated and rerouted by the peer, are echoed back to the sender, since the innermost destination address is the sender's own source address.
The fact is that the Linux kernel did not feed the decapsulated keep-alive packet back into the routing chain. If it did, the packet would simply be rerouted to the sender without further complication. Instead, it delivered the packet to userspace, so it was possible to write a simple program that listened for such packets on a raw socket and echoed them back to the sender. After running this program and echoing a couple of packets back to the Cisco router, the GRE tunnel went 'up' on the remote side, the PIM routers exchanged hellos, and I could finally receive the multicast traffic I had been expecting.
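A rough sketch of what that userspace echoer can look like in Python (the function names are mine, not from the original program; it assumes the plain 4-byte GRE base header, i.e. no checksum/key/sequence options set, which is how these keep-alives usually arrive):

```python
import socket
import struct

def keepalive_reply(pkt):
    """Given a raw IPv4 packet (header included), return the inner IP
    packet to echo back if this is a GRE keep-alive, else None."""
    if pkt[9] != 47:                      # outer IP protocol must be GRE
        return None
    ihl = (pkt[0] & 0x0F) * 4             # outer IP header length
    gre = pkt[ihl:]
    if struct.unpack("!H", gre[2:4])[0] != 0x0800:
        return None                       # outer GRE payload must be IP
    inner = gre[4:]                       # inner IP packet, addressed to the sender
    if inner[9] != 47:                    # inner protocol must be GRE again
        return None
    off = (inner[0] & 0x0F) * 4
    if struct.unpack("!H", inner[off + 2:off + 4])[0] != 0:
        return None                       # inner GRE protocol 0 = keep-alive
    return inner

def echo_keepalives():
    """Listen for GRE keep-alives on a raw socket and bounce them back.
    Needs root; runs forever."""
    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_GRE)
    # IP_HDRINCL: the inner packet already carries a complete IP header
    # whose destination is the Cisco's address, so we send it verbatim.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)
    while True:
        pkt, _ = s.recvfrom(65535)
        inner = keepalive_reply(pkt)
        if inner is not None:
            dst = socket.inet_ntoa(inner[16:20])
            s.sendto(inner, (dst, 0))
```

Since the inner packet already has the sender as its destination, "echoing" is nothing more than putting it back on the wire unmodified.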
I've learned a lot from this experience, especially that when you're messing with obscure protocols (or at least obscure protocol features), you can't count on knowledge from the other side at all. No network analyst on the remote end could help me in any way here, probably because this behavior was undocumented.
Best Answer
It's kind of late, but in case you haven't solved it yet, here's how:
Given your example, just replace:
Public IPs:
tunneling with gre:
On A:
On B:
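The original command blocks appear to be missing from this answer; below is a sketch of the usual recipe with placeholder addresses (198.51.100.1 for A's public IP, 203.0.113.2 for B's, and 192.0.2.10 as one of the far-away IPs to be used on B; substitute your own):

```shell
# On A (the machine that owns the extra IPs), placeholder addresses:
ip tunnel add gre1 mode gre local 198.51.100.1 remote 203.0.113.2 ttl 255
ip link set gre1 up
ip addr add 172.16.0.1/30 dev gre1
ip route add 192.0.2.10/32 via 172.16.0.2 dev gre1   # push the IP through the tunnel

# On B (where you want to use the IPs):
ip tunnel add gre1 mode gre local 203.0.113.2 remote 198.51.100.1 ttl 255
ip link set gre1 up
ip addr add 172.16.0.2/30 dev gre1
ip addr add 192.0.2.10/32 dev lo    # daemons on B can now bind to it
```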
And that's it: you should have a fully functional tunnel and the ability to route IPs far away from where you want to use them, so you can now start binding daemons to those IPs.
Another thing to keep in mind is that if you have that many IPs, you have to be careful with your broadcast domain on point A, and if you're planning to tunnel more than 500 IPs, you have to raise Linux's default neighbour (ARP) table limits so that all entries are kept:
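Concretely, that means raising the neighbour table garbage-collection thresholds (the stock defaults are 128/512/1024, so the 512 soft limit starts pruning well before 1000 entries; the values below are examples, size them to your setup):

```shell
# keep more ARP/neighbour entries than the Linux defaults allow
sysctl -w net.ipv4.neigh.default.gc_thresh1=1024
sysctl -w net.ipv4.neigh.default.gc_thresh2=2048
sysctl -w net.ipv4.neigh.default.gc_thresh3=4096
```

Put the same keys in /etc/sysctl.conf to make them survive a reboot.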
I was looking for the same thing a long time ago and found your post.