Routing – what to use on a branch SRX cluster instead of an lt-0/0/0 interface

juniper, juniper-junos, routing

Background

I have the following config which uses lt interfaces to route traffic between 2 routing instances (default and a custom one called vpn).

The reason I do this is so that I can have overlapping IP ranges/subnets and just use destination NAT to get round it.

My test device is an SRX 100 and it has the following local IPs:

fe-0/0/0.0   - 1.1.1.1/30            - Overlapping IP range
fe-0/0/1.0   - 192.168.218.240/24    - IPSEC to another firewall on 192.168.218.1
fe-0/0/7.0   - 10.10.10.1/24         - Internal LAN
lt-0/0/0.101 - 10.77.77.101/30       - Attached to default routing instance
lt-0/0/0.102 - 10.77.77.102/30       - Attached to VPN routing instance
st0.100      - 10.255.0.100/32       - IPSEC tunnel to remote device

I have a test computer (A) sitting on 10.10.10.2/24 and a test firewall connected via fe-0/0/1.0 at 192.168.218.1.

I have an IPSEC tunnel between the two devices; it's a simple route-based tunnel that works fine. The issue is that I need to reach the remote devices on the network 1.1.1.0/30, which overlaps with my own network on fe-0/0/0.0. Unfortunately re-IPing isn't an option: this is just a test lab, and I have a larger device with 100+ VPNs and lots of overlapping networks that I don't control.

The config I built basically does destination NAT for 2.2.2.0/30 -> 1.1.1.0/30. To get the traffic to route correctly I have a route in my default routing instance that looks like this:

inet.0: 11 destinations, 11 routes (11 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2.2.2.0/30         *[Static/5] 00:31:48
                    > to 10.77.77.102 via lt-0/0/0.101

The nice thing about this is that any traffic routed via the lt interface hits my NAT rule and then gets a route lookup in my vpn routing instance, which routes the traffic out of the tunnel interface.

Total destination-nat rules: 1
Total referenced IPv4/IPv6 ip-prefixes: 1/0

Destination NAT rule: DNAT-1               Rule-set: t
  Rule-Id                    : 1
  Rule position              : 1
  From routing instance      : vpn
    Destination addresses    : 2.2.2.0         - 2.2.2.3


vpn.inet.0: 5 destinations, 5 routes (5 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

1.1.1.0/30         *[Static/5] 05:04:34
                    > via st0.100
10.10.10.0/24      *[Static/5] 00:34:52
                    > to 10.77.77.101 via lt-0/0/0.102

This means I can ping 2.2.2.1 from my computer (10.10.10.2) and it will flow correctly through both routing tables via the lt interface and down the tunnel to the remote overlapping network.

Jul 31 17:05:28  Gareth-FW RT_FLOW: RT_FLOW_SESSION_CREATE: session created 10.10.10.2/1->2.2.2.1/21042 icmp 10.10.10.2/1->2.2.2.1/21042 None None 1 allow_all(global) trust trust 8502 N/A(N/A) fe-0/0/7.0 UNKNOWN UNKNOWN UNKNOWN
Jul 31 17:05:28  Gareth-FW RT_FLOW: RT_FLOW_SESSION_CREATE: session created 10.10.10.2/1->2.2.2.1/21042 icmp 10.10.10.2/1->1.1.1.1/21042 None DNAT-1 1 allow_all(global) vpn vpn 8503 N/A(N/A) lt-0/0/0.102 UNKNOWN UNKNOWN UNKNOWN

Finally, the actual question

This all works great on my test SRX 100. I would now like to deploy it to my SRX 240 cluster running JUNOS 12.1X46-D10.2 and I instantly find that you can't have an lt interface!

Is there a way I can replicate this config using some other interface or method to route my traffic? I would like to avoid using a physical interface if possible.

Thanks

SRX Config

I've left out the IPSEC config as it's just a simple setup.

root@Gareth-FW# show interfaces
fe-0/0/0 {
    unit 0 {
        family inet {
            address 172.30.1.100/24 {
                preferred;
            }
            address 1.1.1.1/30;
        }
    }
}
lt-0/0/0 {
    unit 101 {
        encapsulation ethernet;
        peer-unit 102;
        family inet {
            address 10.77.77.101/30;
        }
    }
    unit 102 {
        encapsulation ethernet;
        peer-unit 101;
        family inet {
            address 10.77.77.102/30;
        }
    }
}
fe-0/0/1 {
    unit 0 {
        family inet {
            address 192.168.218.240/24;
        }
    }
}
fe-0/0/7 {
    unit 0 {
        family inet {
            address 10.10.10.1/24;
        }
    }
}
st0 {
    unit 100 {
        family inet {
            address 10.255.0.100/32;
        }
    }
}

root@Gareth-FW# show routing-options
static {
    route 2.2.2.0/30 next-hop 10.77.77.102;
}

root@Gareth-FW# show routing-instances
vpn {
    instance-type virtual-router;
    interface lt-0/0/0.102;
    interface lo0.101; ## 'lo0.101' is not defined
    interface st0.100;
    routing-options {
        static {
            route 1.1.1.0/30 next-hop st0.100;
            route 10.10.10.0/24 next-hop 10.77.77.101;
        }
        router-id 10.200.0.2;
    }
}

root@Gareth-FW# show security nat
destination {
    pool test {
        address 1.1.1.0/30;
    }
    rule-set t {
        from routing-instance vpn;
        rule DNAT-1 {
            match {
                destination-address 2.2.2.0/30;
            }
            then {
                destination-nat {
                    pool {
                        test;
                    }
                }
            }
        }
    }
}

Best Answer

One way would be to use a physical loopback cable and run a /30 or /31 over that. The downside is that you lose two revenue ports.
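In its simplest form (a single cable between two otherwise unused ports, which would only exist on one node of the cluster), the cabled pair just takes over the addressing and routing-instance membership of the lt units. A rough sketch, assuming ge-0/0/14 and ge-0/0/15 are the spare ports you cable together (the port numbers are placeholders):

ge-0/0/14 {
    unit 0 {
        family inet {
            address 10.77.77.101/30;    ## takes the place of lt-0/0/0.101
        }
    }
}
ge-0/0/15 {
    unit 0 {
        family inet {
            address 10.77.77.102/30;    ## takes the place of lt-0/0/0.102
        }
    }
}
routing-instances {
    vpn {
        interface ge-0/0/15.0;          ## replaces lt-0/0/0.102
    }
}

Your existing static routes don't need to change, since the 10.77.77.100/30 next hops stay the same; you would also need to add the new interfaces to whichever security zones the lt units were in.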

@Jeff, you'll find that the lt interface is an internal-only port not associated with any PIMs etc., which is why you can't use it in a cluster.

EDIT: If you do go the physical link route, I suggest you use one cable per chassis and run it in a reth (it should work).
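A sketch of that reth idea, purely as a starting point (reth2/reth3, redundancy-group 1 and the node-1 port numbering ge-5/0/x are all assumptions, and I haven't tested it): cable ge-0/0/14 to ge-0/0/15 on each chassis and make the ends children of two new reths so the hairpin follows a failover.

chassis {
    cluster {
        reth-count 4;                       ## increase to cover the two extra reths
    }
}
interfaces {
    ge-0/0/14 {
        gigether-options {
            redundant-parent reth2;         ## node 0, default-instance side
        }
    }
    ge-5/0/14 {
        gigether-options {
            redundant-parent reth2;         ## node 1, default-instance side
        }
    }
    ge-0/0/15 {
        gigether-options {
            redundant-parent reth3;         ## node 0, vpn-instance side
        }
    }
    ge-5/0/15 {
        gigether-options {
            redundant-parent reth3;         ## node 1, vpn-instance side
        }
    }
    reth2 {
        redundant-ether-options {
            redundancy-group 1;
        }
        unit 0 {
            family inet {
                address 10.77.77.101/30;    ## was lt-0/0/0.101
            }
        }
    }
    reth3 {
        redundant-ether-options {
            redundancy-group 1;
        }
        unit 0 {
            family inet {
                address 10.77.77.102/30;    ## was lt-0/0/0.102
            }
        }
    }
}
routing-instances {
    vpn {
        interface reth3.0;                  ## was lt-0/0/0.102
    }
}

Keeping both reths in the same redundancy group means their active child links are always on the same chassis, so the loop stays local to whichever node is primary.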