Duncan Epping knows his VMware networking quite well, and the scenario he describes is particularly nasty, but it's a little unusual (four NICs aggregated into two separate EtherChannel groups). His analysis is right, though: VMware doesn't support link aggregation in the way that setup required.
Port aggregation does not improve single-session bandwidth; it makes it easier to get better overall utilization of the available links. Your four links won't ever give a single session from a server 4 Gbps of potential bandwidth, for example. Individual sessions still traverse a single NIC on the VMware host (or any other system, for that matter) and cross your switches over single point-to-point connections. However, if you choose a load balancing algorithm, separate sessions will be distributed across the available links, giving you better overall performance.

With VMware you can choose various teaming policies (failover only, route by source port hash, and route by source/destination IP hash), and unless it's changed recently it only supports static trunking, not active LACP. Load balancing will only work on correctly configured switches, so if you want to use it you will have to do some sort of port trunking/EtherChannel configuration on your switches. This VMware KB article explains some of the background and gives Cisco and HP configuration examples.
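As a rough sketch of what the switch side looks like on a Cisco switch, a static EtherChannel (which is what VMware's IP hash policy expects, since LACP isn't negotiated) would be something like the following. The port-channel number, port range, and vlan are placeholders; check the KB article for the specifics of your platform:

interface range GigabitEthernet 0/1 - 2
 description Uplinks to ESX host vSwitch (IP hash teaming)
 switchport mode access
 switchport access vlan 10
 ! "mode on" creates a static channel with no LACP/PAgP negotiation
 channel-group 1 mode on
!
! Hash on source and destination IP to match the vSwitch policy
port-channel load-balance src-dst-ip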
The drawback is that if you want to distribute your NICs across separate switches and use IP hashing to load balance, then those switches must be stacked in some fashion; otherwise you will end up with a problem similar to the one Duncan described. This has some obvious risks, since an issue with the stack can impact all NICs at the same time. The fact that VMware still does not fully support LACP for vSwitches makes this a lot harder than it should be.
Pardon, but in your cut-and-pasted configs you appear to be describing Gi0/48, your uplink to your router, while in your question you refer specifically to hosts connected to Gi0/18. I'm going to assume you're describing two different ports here. Further, I'm assuming from the details in your config statements and question that vlan 3 is being used for the 192.168.0.80/28 traffic, and that the vlan has already been declared on your 3560 (check with sh vlan).
First of all, your port Gi0/18 should be configured for access mode on vlan 3. Likely, something like this:
interface GigabitEthernet 0/18
switchport access vlan 3
switchport mode access
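To verify the port actually ended up in access mode on vlan 3, something like this should do:

sh interfaces GigabitEthernet 0/18 switchport
sh vlan brief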
As far as other recommendations go: will all or most of the traffic from your IP subnets be to and from the internet? Basically, if you have enough traffic between subnets, it may suit you to have the 3560 act as your internal router and dedicate your 3825 to being your border router. The problem with the current setup is that if your router is bearing the entire load for all routing, then a packet from one subnet arrives at your switch, is forwarded over the dot1q trunk on some vlan X, the router makes a routing decision, and the same packet goes back along the dot1q trunk on some new vlan Y, now destined for the destination machine. To be clear, I'm simply describing the situation of internal traffic to your customers/organization that crosses your different subnets: every such packet crosses the trunk twice.
Instead, you can configure the 3560 with, assuming normal conventions, the first address of each vlan/subnet, e.g. 192.168.0.81, and enable ip routing. The next step is to create a new subnet specifically for the link between the router and the switch. For convenience I'd use something completely different; for example, 192.0.2.0/24 is reserved for documentation examples. Configure the router at 192.0.2.1 and the switch at 192.0.2.2. Have the switch use 192.0.2.1 as its default route, and configure the router to reach 192.168.0.0/16 via the switch at 192.0.2.2. If your network is small enough, static routes should be sufficient; no need for OSPF or anything.
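A minimal sketch of what that looks like on the 3560, assuming vlan 3 carries the 192.168.0.80/28 subnet and vlan 100 is the new transit vlan to the router (both vlan numbers are just examples):

ip routing
!
interface Vlan3
 description Gateway for 192.168.0.80/28
 ip address 192.168.0.81 255.255.255.240
!
interface Vlan100
 description Transit link to the 3825
 ip address 192.0.2.2 255.255.255.0
!
ip route 0.0.0.0 0.0.0.0 192.0.2.1

And the corresponding static route on the 3825, pointing internal traffic back at the switch:

ip route 192.168.0.0 255.255.0.0 192.0.2.2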
Of course, this would be a rather dramatic change, but it has the potential to be a large improvement. It all depends on the nature of your traffic.
For reference, Cisco lists the Catalyst 3560G-48TS and Catalyst 3560G-48PS as having a 38.7 Mpps forwarding rate, and the Cisco 3825 as having a 0.35 Mpps forwarding rate. Mpps, just in case you don't know, is millions of packets per second.
It's not bandwidth, but rather how many 64-byte packet routing decisions the device can make per second. The length of the packet doesn't affect how long it takes to make a routing decision, so peak performance in bits or bytes will fall somewhere in a range. In terms of bandwidth, 350 kpps works out to about 180 Mbps with 64-byte packets and 4.2 Gbps with 1500-byte packets. Mind you, that's in bits per second, so think of it as roughly 22 megabytes or 525 megabytes per second in regular file-size terms.
In theory, this means that your 3560G can route somewhere between 19.8 Gbps and 464 Gbps, or roughly 2.5 GBps and 58 GBps.
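For anyone who wants to check the arithmetic, the conversion is just:

throughput (bps) = forwarding rate (pps) × packet size (bytes) × 8

3825:       350,000 pps ×   64 bytes × 8 ≈  180 Mbps
3825:       350,000 pps × 1500 bytes × 8 =  4.2 Gbps
3560G:   38,700,000 pps ×   64 bytes × 8 ≈ 19.8 Gbps
3560G:   38,700,000 pps × 1500 bytes × 8 ≈  464 Gbps

Real traffic is a mix of packet sizes, so actual throughput lands somewhere between the two extremes for each device.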
Actually, looking at those numbers, you most definitely should consider the plan I described above. Dedicate your 3825 to handling, presumably, NAT'd external traffic and let your 3560 handle the rest.
I'm sorry this is so long; I'm bored at work waiting for tapes to finish.
Cheers.
Best Answer
You will need to implement some method of InterVLAN routing to route between the different vlans in your scenario. However, because you are using 2960 switches, you won't have the capability of enabling "ip routing" and configuring SVIs as a default gateway for each subnet. One solution is to implement "router on a stick", using a separate Layer 3 router or switch to handle the IP routing and SVIs for each vlan/subnet; see the sketch below. The traffic is then trunked out of the 2960 to the MLS switch or router and returned over the same trunk port. This isn't the best method, as it greatly reduces your throughput due to the single interface, so you will need to determine whether that amount of throughput is acceptable for your design. If not, I would recommend you upgrade to 3560G or 3750G switches, which are capable of routing your Layer 3 traffic between subnets.
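As a rough sketch of the router-on-a-stick approach, with vlans 10 and 20 and their subnets as purely hypothetical examples, the router side would look something like this:

interface GigabitEthernet 0/0
 description Trunk to the 2960
 no ip address
!
interface GigabitEthernet 0/0.10
 encapsulation dot1Q 10
 ip address 192.168.10.1 255.255.255.0
!
interface GigabitEthernet 0/0.20
 encapsulation dot1Q 20
 ip address 192.168.20.1 255.255.255.0

On the 2960 side, the uplink just needs to be a trunk (the 2960 only does dot1q, so no encapsulation command is needed):

interface GigabitEthernet 0/24
 switchport mode trunk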
Also, in your VMware ESX configuration, is the Service Console dedicated to a specific eth interface, or do you have it connected to a vSwitch and tagged with a specific vlan id? If you are trunking to that port, make sure you either configure the Service Console's tagging to the requested vlan id, or implement "switchport trunk native vlan xx", with xx being the vlan you want the Service Console traffic in.
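For the second option, the switch port facing the ESX host would look something like this (the port number and vlan ids here are only placeholders):

interface GigabitEthernet 0/1
 switchport mode trunk
 ! Untagged frames (e.g. an untagged Service Console) land in vlan 5
 switchport trunk native vlan 5
 ! Optionally restrict which tagged vlans the vSwitch can see
 switchport trunk allowed vlan 3,5,10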