The short answer is no. Your queuing policy needs to be
applied outbound at the point of bottleneck. In this case, the bottleneck is
the WAN connection (20 Mbps) between the pfSense and the 1841. Thus, the
proper location for your queuing policies is outbound on the LAN interface (a bit of a misnomer) of the pfSense and outbound on Fa0/1
of the 1841, which you have.
All that being said, your policy on the 1841 is flawed. In your example, you've reserved 1 Mbps of bandwidth for VoIP and applied the policy to a 100 Mbps interface. What happens when the PCs start pushing 70 Mbps of traffic while trying to make a VoIP call? From a queuing standpoint on the 1841, the answer is nothing. There is only 70 Mbps of data flowing out a 100 Mbps Fa0/1, so there is always 1 Mbps available for VoIP. However, once
this traffic reaches the pfSense, at least 50 Mbps of data and VoIP will be dropped. Your calls would be terrible at best.
Without getting into all of the details, and using your
prior example, the policy should be something like this:
class-map match-any CM-VOICE-TRAFFIC
 match access-group 145
!
policy-map PM-PRIORITISE-VOICE-child
 class CM-VOICE-TRAFFIC
  set ip dscp ef
  priority 1000
 class class-default
  fair-queue
!
policy-map Shape-20Mb-parent
 class class-default
  shape average 20000000
  service-policy PM-PRIORITISE-VOICE-child
!
interface FastEthernet0/1
 service-policy output Shape-20Mb-parent
!
access-list 145 permit ip 10.0.59.0 0.0.0.255 any
This policy creates an artificial bandwidth limit on all traffic leaving Fa0/1 on the 1841, so the pfSense will never have the opportunity to drop traffic indiscriminately. Also, instead of the bandwidth command, the priority command should be used, because it limits latency and jitter. The tagging and the ACL change should be self-explanatory.
Unfortunately, I don't think there's a good way to do what you want without the provider's involvement. Working within that restriction, your best bet may be to implement an outbound policy on your LAN interface. For example, if you are PATing the business network and guest networks separately, so that the return traffic can be identified by destination IP, then a hierarchical policy that guarantees bandwidth to the business network will help. Essentially, the parent policy would shape all traffic to 10 Mbps. The parent would then call a child policy that guarantees 7 Mbps for the business network. The remaining traffic (guest network) would then be able to use whatever is left over.
Keep in mind that this is imperfect, since the traffic has already traversed the WAN. However, if the guest traffic is TCP, and starts getting dropped by your outbound LAN policy, the TCP session should throttle itself. This won't work for UDP at the transport layer.
A sample policy would look something like this:
ip access-list extended BUSINESS-NETWORK
 permit ip any host 1.1.1.1
!
class-map BUSINESS-NETWORK
 match access-group name BUSINESS-NETWORK
!
policy-map PARENT
 class class-default
  shape average 10000000
  service-policy CHILD
!
policy-map CHILD
 class BUSINESS-NETWORK
  bandwidth 7000
!
interface Fa0/0
 description LAN interface
 ip address x.x.x.x
 service-policy output PARENT
This is an imperfect example, but is the best I can come up with without provider involvement.
Best Answer
What you describe would be something like this:
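The original config is missing here; a minimal sketch of that kind of policy, assuming Fa0/1 is the WAN-facing interface and using hypothetical ACL, class, and policy names, would be:

ip access-list extended ACL-PRINT-TRAFFIC
 permit ip any 192.0.2.0 0.0.0.255
!
class-map CM-PRINT
 match access-group name ACL-PRINT-TRAFFIC
!
policy-map PM-SHAPE-PRINT
 class CM-PRINT
  shape average 1000000
!
interface FastEthernet0/1
 service-policy output PM-SHAPE-PRINT

(shape average takes bits per second, so 1000000 is 1 Mbps.)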
This would match traffic going to 192.0.2.0/24 and shape it to 1 Mbps. However, I don't think this is necessarily what you want: if there is no other demand on the circuit, wouldn't you want the print job to get full capacity at that time?
Maybe classify traffic into 3 classes, like Important, Normal, and Scavenger.
Configuration could be something like:
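The original config is also missing here; something along these lines, where the class names, ACLs, and interfaces are placeholders you'd adjust to your topology:

class-map CM-IMPORTANT
 match access-group name ACL-IMPORTANT
class-map CM-NORMAL
 match access-group name ACL-NORMAL
!
policy-map PM-LAN-IN
 class CM-IMPORTANT
  set qos-group 5
 class CM-NORMAL
  set qos-group 3
 class class-default
  set qos-group 0
!
class-map CM-QOS5
 match qos-group 5
class-map CM-QOS3
 match qos-group 3
!
policy-map PM-WAN-OUT
 class CM-QOS5
  priority percent 80
 class CM-QOS3
  bandwidth percent 20
 class class-default
  fair-queue
!
interface FastEthernet0/0
 service-policy input PM-LAN-IN
!
interface FastEthernet0/1
 service-policy output PM-WAN-OUT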
Now on LAN ingress we match the traffic and give it internal qos-group 5, 3, or 0. These numbers are insignificant and could be anything; they're just a way to differentiate the traffic without mangling the existing CoS/PREC/DSCP bits.
After we've marked the traffic on LAN ingress, on WAN egress we match on the earlier defined qos-groups and treat the traffic differently.
Here we give Important traffic a low-latency priority guarantee of 80% of the capacity. For Normal traffic we give a 20% contract, so if Important traffic is sending at 100% and you start to send Normal traffic, 20% of the Important traffic will be dropped in favor of letting some Normal traffic pass. We give no contractual capacity to the Scavenger class; it will only send if the Important or Normal classes are using less than their contractual capacity.