How to connect private users to applications within a trusted network without direct connection to the Server Switch

cisco-nexus-5k, cisco-nexus-7k, design

I have 2 Control Center Sites, each with 2 N7Ks in a full-mesh design, 2 Nexus 5548UPs as the Internal Server Farm Aggregation, and 2 ASA firewalls hanging off each N5K Agg. Both Sites have a mirror-image design. We have users that need direct access to the Internal Server Farm applications, and we also need a security boundary for outbound connection requests from the Internal Server applications. In addition, I need to host private DMZs within the Agg to isolate inbound connection requests from what we classify as lower security zones (the N7K CORE will be using vrf: Global for routes to the lower-security network subnets).
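For reference, a minimal sketch of how the two routing contexts might be declared on the N7Ks; the VRF names, VLAN number, and addressing here are assumptions for illustration only:

```
! Placeholder VRF names: "Global" stands in for the lower-security routing
! table and "Internal" for the high-security zone
vrf context Global
vrf context Internal

feature interface-vlan

! Example SVI toward the firewall Outside VLAN, kept in the lower-security VRF
interface Vlan900
  description FW Outside handoff to lower-security networks
  vrf member Global
  ip address 172.16.90.1/24
  no shutdown
```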

Typically the users would be considered a lower-security zone, but this design hosts a control system for a large power grid. With this in mind, I also do not want to connect the users directly to the N5K Agg, so that the SITE1 Server Farm Agg can go down and SITE2 can take over hosting the applications (currently we connect users to the same physical switch as the applications). I would like to provide a classic data center design where the users route to the Server Farm from the HA L3 CORE (4 x N7K full mesh). However, since they are considered the same security level as the "Internal Servers", I want to isolate them into a private VPN cloud hosted on the N7K CORE. As N7Ks support MPLS, this would be the most logical approach; however, my current design has the L2/L3 boundary for the Internal Servers at the Nexus 5548 Aggregation, since the firewalls are also connected there. Nexus 5Ks don't support MPLS, but they do support VRF Lite. The N5Ks are also connected in a full mesh to the local N7Ks at each site.
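If MPLS turns out to be more than is needed, VRF Lite alone can give the users their own routing instance on the N7K core. A rough sketch, with the VLAN number and subnet assumed:

```
! Keep Internal Users in their own VRF on each N7K so their routes never mix
! with the lower-security (Global) table; VLAN 100 / 10.1.100.0/24 assumed
vrf context Internal

interface Vlan100
  description Internal Users
  vrf member Internal
  ip address 10.1.100.2/24
  no shutdown
```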

To utilize all 4 links between the N5Ks and N7Ks, I either need to configure point-to-point L3 links, which pigeonholes the idea of isolating Internal User traffic from the Core from traffic that needs to be forwarded out the firewall, or I can utilize FabricPath between the 5Ks and 7Ks with VRF Lite, where the only FabricPath VLANs would be the SVIs between the 4 nodes and the firewall's Outside VLAN for connecting to the N7K's vrf: Global routing table. This is probably overkill since FabricPath has to be licensed, but we have unique security requirements, so cost tends to be a small issue.
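For the FabricPath option, the node-facing configuration would look roughly like this on both the N5Ks and N7Ks; the switch-ID, VLAN, and interface numbers are assumptions (and on the 7K side this also presumes F-series ports and the Enhanced Layer 2 license):

```
install feature-set fabricpath
feature-set fabricpath

! Switch-ID must be unique per node; value assumed
fabricpath switch-id 11

! Only the inter-node SVI VLAN and the firewall Outside VLAN carry FabricPath
vlan 50
  mode fabricpath

! Repeat for each of the 4 links toward the FabricPath peers
interface Ethernet1/1
  description Link toward FabricPath peer
  switchport mode fabricpath
  no shutdown
```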

For routing, I would install a default route in the firewall pointing to N7K vrf: Global, which would run OSPF or EIGRP and learn routes to the other lower-security networks. For the high-security zone, I would install a vrf: Internal on all N5Ks and N7Ks and most likely run BGP, since MPLS at the N7Ks requires the use of MP-BGP. This VRF would only learn routes for the SITE2 Internal Server Farm and the Internal Users (our applications need L3 between Sites to prevent split brain). I also have to take great care not to allow vrf: Global to exchange routes with vrf: Internal, as this would create an asymmetric nightmare with stateful firewalls providing the L3 connection between the 2 VRFs. A simple default route at the local Site N5K and firewall, plus a summary route in the N7K pointing to the Internal Server subnets, will prevent that problem.
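A sketch of those routing pieces, keeping vrf: Global and vrf: Internal apart; the AS number, subnets, and next-hop addresses are assumptions:

```
! N5K Agg, vrf Internal: static default toward the local firewall inside
! interface, and iBGP between sites for the Internal Server/User routes only
feature bgp

vrf context Internal
  ip route 0.0.0.0/0 10.1.1.1

router bgp 65001
  vrf Internal
    address-family ipv4 unicast
      network 10.1.200.0/24
    neighbor 10.2.200.2
      remote-as 65001
      address-family ipv4 unicast

! N7K, vrf Global: summary route for the Internal Server subnets pointing at
! the firewall Outside interface, so both directions pass the same firewall
vrf context Global
  ip route 10.1.200.0/22 172.16.90.10
```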
Being new to Nexus gear, and especially to FabricPath, I am not certain I should implement it in this fashion, since FabricPath is really meant for scaling L2 for vMotion, server-to-server traffic, etc.

Alternatively, I have considered building another VDC off the N7K to provide the FHRP and moving the firewalls to that VDC. The N5Ks would then be running only FabricPath, with no L3 of any sort.
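A very rough sketch of that alternative: a new VDC carved out on the N7K that owns the server SVIs and an HSRP gateway, with the firewalls attached to it. The VDC name, interfaces, VLAN, and addresses are all assumptions:

```
! From the default/admin VDC: create the aggregation VDC and give it ports
vdc AGG
  allocate interface Ethernet3/1-4

! Inside the AGG VDC: HSRP as the FHRP for the Internal Server VLAN
feature interface-vlan
feature hsrp

interface Vlan200
  description Internal Servers
  ip address 10.1.200.2/24
  no shutdown
  hsrp 200
    ip 10.1.200.1
```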

Since this is most likely not a typical design, I would appreciate any feedback on it.


Best Answer

Maybe I read it wrong, but you allow users and internal servers in the same security zone, so all you need is for users and internal servers to be in different Layer 2 domains? Don't create VRFs and routing between VRFs just for that purpose. There must be a simpler way to do it, for example different Layer 3 VLANs + ACLs.

On the 7K, give one VLAN (say VLAN 100) to users and one (VLAN 200) to internal servers; on the users' VLAN interface you can just add an ACL to permit only what you want the users to reach. It is possible to set up, in my opinion; if you see anything in your environment that does not support this, tell me and we can discuss.
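For example, something like this on the 7K (the subnets, ports, and ACL name are just placeholders):

```
! Users in VLAN 100, internal servers in VLAN 200; permit only what the
! users are allowed to reach on the server subnet, allow everything else
ip access-list USERS-IN
  permit tcp 10.1.100.0/24 10.1.200.0/24 eq 443
  permit tcp 10.1.100.0/24 10.1.200.0/24 eq 22
  deny ip 10.1.100.0/24 10.1.200.0/24
  permit ip 10.1.100.0/24 any

interface Vlan100
  description Users
  ip address 10.1.100.1/24
  ip access-group USERS-IN in
  no shutdown
```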

If you want to run FabricPath, you can use the four 5K-7K links for FabricPath and add one more link between the 5K and 7K just to trunk VLANs 100 and 200.
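On the 5K side that could look something like this (interface numbers assumed):

```
! The four 5K-7K links run FabricPath
interface Ethernet1/1-4
  switchport mode fabricpath

! One extra link stays a classic Ethernet trunk carrying only VLANs 100 and 200
interface Ethernet1/5
  switchport mode trunk
  switchport trunk allowed vlan 100,200
```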