I would use VLAN-based policing, which works better on these switches. Here is an example policing to a rate of 48 Mb/s:
mls qos
!
interface GigabitEthernet1/0/2
switchport access vlan 500
switchport mode access
mls qos vlan-based
!
class-map match-all CUSTOMER_1
match input-interface GigabitEthernet1/0/2
!
policy-map VLAN500_POLICE
class CUSTOMER_1
police 48000000 18000000 exceed-action drop
!
policy-map VLAN500_PARENT
class class-default
set dscp default
service-policy VLAN500_POLICE
!
interface Vlan500
service-policy input VLAN500_PARENT
Under the parent policy you have to 'set' something in order for the child policy to take effect. It can be anything, so in this example I'm simply setting the DSCP to 0.
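To confirm the hierarchical policy is attached and counting, something like the following should work (exact command availability varies by platform and IOS version; the interface and SVI names are taken from the example above):

! Verify QoS is enabled globally and the port is VLAN-based
show mls qos
show mls qos interface GigabitEthernet1/0/2
! Check the policy attached to the SVI and its per-class counters
show policy-map interface Vlan500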
Your question's pretty broad. There are a lot of different commands you can use to troubleshoot and monitor QoS, so I'll focus on your primary question, which is how to reasonably verify that your QoS configuration is working and how to read the policy-map interface output.
The only true way to verify that QoS is working is to hook up a traffic generator and monitor your drop rate in various queues. Since that isn't typically feasible, particularly in a production environment, all you can really do is verify that the traffic is being marked and classified properly.
What you're really looking for, when it comes to verifying if your QoS configuration is working, is for the counters in the policy-map interface command to increment.
So, for example, in the output you provided:
Class-map: VOICE (match-any)
  3860628 packets, 1070196895 bytes
  5 minute offered rate 0 bps, drop rate 0 bps
  Match: protocol sip
    97348 packets, 49867304 bytes
    5 minute rate 0 bps
  Match: protocol rtp
    3763280 packets, 1020329591 bytes
    5 minute rate 0 bps
  Match: access-group name NEC-PBX
    0 packets, 0 bytes
    5 minute rate 0 bps
  Priority: 40% (340 kbps), burst bytes 8500, b/w exceed drops: 5
You can see that you're seeing packets under SIP and RTP, but not NEC-PBX. If you know you're getting SIP and RTP traffic across a link, you should see the packet counts increment and that's a reasonable way to know that your configuration is basically working.
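One simple way to watch the counters move (assuming an IOS platform where these commands are supported; GigabitEthernet0/1 is just a placeholder interface) is to clear them, generate some known traffic, and re-run the show command:

! Reset the interface counters, then send test SIP/RTP traffic across the link
clear counters GigabitEthernet0/1
! Re-check; the per-class packet counts should increase between runs
show policy-map interface GigabitEthernet0/1

If a class you expect to match (like NEC-PBX above) stays at zero while traffic is flowing, that usually points to a classification problem rather than a policing problem.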
Best Answer
There are cases where a given platform cannot shape at all, or cannot shape in the required direction. On the (Cisco) platforms I have come across, ingress QoS can usually police but not shape, while egress QoS can queue/shape and/or police.
Shaping/queuing requires buffer memory per port (which can be a very limited resource on some platforms), and it introduces delay and jitter as soon as those (egress) buffers start to fill up.
There are cases where varying, volatile RTTs (read: jitter) hurt the application more than a few lost packets would. Also, not all TCP congestion-avoidance algorithms are equal: some react only to packet loss, while others take RTT/jitter into account.
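The practical difference shows up directly in MQC configuration. A minimal sketch (the policy names and the 48 Mb/s rate are illustrative, not from a specific platform):

! Policer: excess traffic is dropped (or remarked) immediately, no buffering,
! so no added delay but TCP sees losses
policy-map POLICE_48M
 class class-default
  police 48000000 conform-action transmit exceed-action drop
!
! Shaper: excess traffic is buffered and released at the target rate,
! smoothing bursts at the cost of added delay/jitter while queues fill
policy-map SHAPE_48M
 class class-default
  shape average 48000000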
I found https://blog.ipspace.net/2016/09/policing-or-shaping-it-depends.html and http://packetlife.net/blog/2008/jul/30/policing-versus-shaping/ to show the differences pretty clearly.
In short: Policing is needed in two cases: