- My first question has to do with the links between core and
aggregation. I could use MC-LAG or pure OSPF. With MC-LAG I can have
all links on the same VLAN and it's very easy to have other VLANs
span the whole network, but on the other hand, it feels to me that
point-to-point links with OSPF ECMP are more robust and scalable.
Go for OSPF L3 links: better stability, better scaling, and easier to monitor and troubleshoot. Per-flow ECMP then spreads traffic across the equal-cost uplinks, as the sketch below illustrates.
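For intuition on the ECMP part, here is a toy Python sketch of how per-flow hashing pins each flow to one equal-cost uplink. Real switch ASICs use their own vendor-specific hash functions, so this is only an illustration of the idea, not how any particular box does it:

```python
import hashlib

# Toy illustration of OSPF ECMP: hash the flow's 5-tuple and pick a
# link by modulo. Every packet of one flow lands on the same uplink
# (no reordering), while different flows spread across the links.
def pick_uplink(src_ip, dst_ip, proto, src_port, dst_port, n_links):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_links

# Two different flows toward the same target, four equal-cost uplinks:
print(pick_uplink("10.0.1.5", "10.0.2.9", 6, 49152, 3260, 4))
print(pick_uplink("10.0.1.6", "10.0.2.9", 6, 49153, 3260, 4))
```

The practical upshot is that load balancing is per flow, not per packet, so a single elephant flow still rides one link.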
- The machines on private IP addresses still need internet, so I want
to NAT them out. But our new core switches don't support NAT (as far
as I know), so we are setting up a NAT router on the side. Is there
some smart way in OSPF to redirect traffic from private IPs to the
NAT router? Or should I use PBR?
Opt 1: PBR. It is messy and hard to maintain, and a single NAT device is a single point of failure (the sketch after this list shows what the policy boils down to).
Opt 2: central NAT on the outgoing university router.
Opt 3: get two routers/firewalls/devices that can sit between your core switch and the university network and do the NAT for you.
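As an illustration of what the PBR option amounts to, here is a minimal Python sketch of the match-and-set-next-hop logic. The NAT router address is a made-up assumption for the example; the match ranges are the standard RFC 1918 private blocks:

```python
import ipaddress

# Illustrative only: what a PBR policy effectively does. Traffic
# sourced from private ranges is steered to the NAT router's next
# hop; everything else follows the normal OSPF route.
PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]
NAT_NEXT_HOP = "10.255.0.1"  # assumed address of the side NAT router

def next_hop(src_ip, default_hop):
    src = ipaddress.ip_address(src_ip)
    if any(src in net for net in PRIVATE_RANGES):
        return NAT_NEXT_HOP
    return default_hop

print(next_hop("10.1.2.3", "198.51.100.1"))    # -> 10.255.0.1
print(next_hop("198.51.100.9", "198.51.100.1"))  # -> 198.51.100.1
```

Note the weakness this makes visible: every private flow funnels through that one next hop, which is exactly the single point of failure mentioned above.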
- Do you have any solution for the public IPs? They are sort of
scattered throughout the datacenter, so there is not a single access
point for these machines. It would be very helpful to have a large
VLAN with a /24 or so that spanned the whole network, see my first
question.
With the recommended L3 option, there can be smaller public-IP VLANs/subnets in each area.
Take your public /22 and subnet it into /28s (64 nets x 14 usable hosts) or /26s (16 nets x 62 usable hosts), then have a VLAN in each cluster for the public-IP machines (see the subnetting sketch below). This also makes them easier to filter/protect in the future.
But maybe the best way is to just split up the network into smaller ones and route them with OSPF as needed.
Agreed
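To make the subnet arithmetic above concrete, here is a minimal Python sketch using the standard ipaddress module. The 203.0.112.0/22 prefix (built around the documentation range) is a stand-in assumption for your real public allocation:

```python
import ipaddress

# Worked version of the /22 split suggested above.
block = ipaddress.ip_network("203.0.112.0/22")

as_28s = list(block.subnets(new_prefix=28))
as_26s = list(block.subnets(new_prefix=26))

# /28: 64 subnets of 16 addresses, 14 usable hosts each
print(len(as_28s), as_28s[0].num_addresses - 2)
# /26: 16 subnets of 64 addresses, 62 usable hosts each
print(len(as_26s), as_26s[0].num_addresses - 2)

# One /26 per cluster VLAN, e.g. the first three:
for net in as_26s[:3]:
    print(net, "-> first usable (gateway):", next(net.hosts()))
```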
- Some of you say it's a bad idea to have both public and private IPs
in the same OSPF area. If this is so, what do you think we should do
instead?
This comes from a corporate mindset where security is high on the priority list, if not at the top.
In corporate networks, no external access is allowed to internal machines; all public-facing/accessible machines sit in a DMZ behind a firewall controlling inside and outside access.
Access to internal machines would, in most cases, be provided to staff through a two-factor-authenticated VPN link, so there is no need for public IPs on internal machines.
If you opt for the recommended NAT-capable devices between the core and the rest of the campus network, you can do one-to-one NAT there as required and all internal hosts can have private IPs. This also allows for better security, as only specific services can be allowed inbound to hosts.
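If it helps to picture the one-to-one NAT option, here is a toy sketch of a static bidirectional translation table in Python. All addresses are made-up assumptions; in practice the mappings would be configured on the NAT-capable devices, not in software:

```python
import ipaddress

# One-to-one (static) NAT as a simple bidirectional table: each
# internal host gets a fixed public address, and only the services
# you permit are reachable through it.
STATIC_NAT = {
    ipaddress.ip_address("10.10.1.20"): ipaddress.ip_address("198.51.100.20"),
    ipaddress.ip_address("10.10.1.21"): ipaddress.ip_address("198.51.100.21"),
}
REVERSE = {pub: priv for priv, pub in STATIC_NAT.items()}

def translate_outbound(src):
    """Rewrite a private source address to its fixed public one."""
    return STATIC_NAT.get(ipaddress.ip_address(src))

def translate_inbound(dst):
    """Rewrite a public destination back to the internal host."""
    return REVERSE.get(ipaddress.ip_address(dst))

print(translate_outbound("10.10.1.20"))    # 198.51.100.20
print(translate_inbound("198.51.100.21"))  # 10.10.1.21
```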
If the classification isn't available on the switch model, then you need to look at something like ACL-based QoS classification (layers 2, 3, and 4), so that you can mark traffic (by IP address range, for example) and tell the hardware how you want to prioritise it with mappings.
CoS is for traffic that stays at layer 2 only. If you need to prioritise traffic that is going to get routed, you would use DSCP, which is layer 3.
So CoS puts the marking in the Ethernet frame header and DSCP does it within the IP header. If the traffic gets routed, the Ethernet frame gets stripped, so you would lose your CoS marking and would rely on the DSCP within the IP header.
You achieve more granularity with DSCP compared to CoS because of the difference in bit size: CoS uses only 3 bits to set markings (8 classes), while DSCP uses 6 bits (64 code points).
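To make the bit layout concrete, here is a small Python sketch of where each marking physically lives, assuming a standard 802.1Q tag and the IP ToS/Traffic Class byte:

```python
# CoS (PCP) is 3 bits inside the 802.1Q tag's 16-bit TCI field;
# DSCP is the top 6 bits of the IP header's ToS/Traffic Class byte.

def vlan_tci(pcp, dei, vlan_id):
    """Build the 802.1Q TCI: 3-bit PCP | 1-bit DEI | 12-bit VID."""
    return (pcp & 0x7) << 13 | (dei & 0x1) << 12 | (vlan_id & 0xFFF)

def tos_byte(dscp, ecn=0):
    """Build the IP ToS byte: 6-bit DSCP | 2-bit ECN."""
    return (dscp & 0x3F) << 2 | (ecn & 0x3)

# CoS gives 2**3 = 8 classes; DSCP gives 2**6 = 64 code points.
print(f"CoS 5 on VLAN 100 -> TCI 0x{vlan_tci(5, 0, 100):04x}")
print(f"DSCP EF (46)      -> ToS 0x{tos_byte(46):02x}")  # 0xb8
```

Because the VLAN tag is part of the Ethernet frame, you can see directly why the CoS bits disappear at a routed hop while the DSCP bits survive inside the IP header.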
All in all, I would advise you to separate your iSCSI network from your core infrastructure, and then you wouldn't even need to worry about QoS.
Best Answer
In a word: deep buffers, preferably per-port buffers.
iSCSI does not tolerate frame drops. Delays are sort of OK, but a switch dropping a frame will create all manner of issues.
(Finding buffer size -- and type -- in manufacturer specs can be a challenge. They aren't always provided, and what is provided isn't always true.)
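As a rough illustration of why buffer depth matters here, this is some back-of-envelope incast arithmetic in Python. The sender count, rates, and burst duration are made-up assumptions for the example, not measurements or vendor figures:

```python
# Incast: several storage nodes answer one initiator at once, and the
# excess over what the egress port can drain must sit in the buffer
# or be dropped (which iSCSI handles badly).
senders = 8                  # assumed storage nodes replying at once
line_rate_gbps = 10          # assumed per-port rate
burst_ms = 1                 # assumed overlap of the replies
drain_gbps = line_rate_gbps  # one egress port drains the burst

arriving_bits = senders * line_rate_gbps * 1e9 * (burst_ms / 1e3)
drained_bits = drain_gbps * 1e9 * (burst_ms / 1e3)
needed_bytes = (arriving_bits - drained_bits) / 8

# 8 senders overlapping for 1 ms at 10 Gb/s leaves ~8.75 MB queued on
# that single egress port; a shallow shared buffer drops frames instead.
print(f"egress buffer needed: {needed_bytes / 1e6:.2f} MB")
```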