Ethernet – Monitoring best practice for thresholding errors on an interface

Tags: best-practices, ethernet, monitoring

When monitoring interface errors, what percentage of traffic should the 'critical' threshold be set to according to best practices, and does it depend on the interface type (T1, Ethernet, etc.)? It would be a huge bonus if you can explain the justification for the particular percentage. I've found a few thread comments on various sites that mention 1%, but with no real justification.

Best Answer

The Ethernet standard officially allows a bit-error rate (BER) of 10^-12, while in practice hardware typically achieves a much better BER than the standard demands.
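To get a feel for what that objective means for error counters, here is a rough sketch (my own illustration, not from the standard text) converting a 10^-12 BER into an expected frame error rate; the frame size and link speed are assumptions:

```python
# Illustrative only: what a 10^-12 BER looks like in frame terms.
BER = 1e-12             # bit-error-rate objective
frame_bits = 1500 * 8   # assume 1500-byte frames
link_bps = 10e9         # assume a 10 Gb/s link running at line rate

# Probability that at least one bit in a frame is errored
# (approximately BER * frame_bits when BER is tiny).
frame_error_rate = 1 - (1 - BER) ** frame_bits

frames_per_second = link_bps / frame_bits
errored_frames_per_second = frames_per_second * frame_error_rate
seconds_between_errors = 1 / errored_frames_per_second

print(f"frame error rate ~ {frame_error_rate:.2e}")                      # ~1.2e-08
print(f"seconds between errored frames ~ {seconds_between_errors:.0f}")  # ~100 s at line rate
```

In other words, a link that only just meets the standard should still show on the order of one errored frame per hundred million, so even very small error percentages are far above what healthy hardware produces.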

You should also be able to search for 'SQA' (Service Quality Assurance) or 'SLA' (Service Level Agreement) documents; some companies publish them, so you can check what your competitors are offering and set your own thresholds to a comparable level.

Our SQA states to customers that 0.02% packet loss is a minor fault (we will fix it if a ticket is opened), which I think is quite a large loss for a fibre connection, but the same SQA also covers DSL, so we didn't want to be too aggressive with it. So far this has been sufficient for customers, but we are prepared to reduce the number if it starts hurting sales.
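As a minimal sketch of how a figure like that turns into a monitoring check, here is an illustrative threshold test over polled interface counter deltas (counter names, the polling approach, and the example numbers are my assumptions; only the 0.02% figure comes from the SQA above):

```python
# Minimal sketch: compare errored-packet percentage per polling interval
# against a "minor fault" threshold. Counter deltas would come from your
# poller (e.g. SNMP ifInErrors / ifInUcastPkts differences).

MINOR_FAULT_THRESHOLD = 0.02  # percent, per the SQA example above

def error_percentage(in_errors_delta: int, in_packets_delta: int) -> float:
    """Errored packets as a percentage of all packets seen in the interval."""
    total = in_packets_delta + in_errors_delta
    if total == 0:
        return 0.0
    return 100.0 * in_errors_delta / total

# Example: 50 errored packets out of ~200k good packets in the interval
pct = error_percentage(in_errors_delta=50, in_packets_delta=200_000)
if pct >= MINOR_FAULT_THRESHOLD:
    print(f"minor fault: {pct:.3f}% errored packets")
else:
    print(f"ok: {pct:.3f}% errored packets")
```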

There are several tools online that let you check how much packet loss hurts TCP throughput, which is useful information when deciding what loss is acceptable for your application or product; a rough version of that calculation is sketched below.
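The calculation those tools typically perform is based on the Mathis et al. approximation, throughput <= MSS / (RTT * sqrt(p)). Here is a hedged sketch; the MSS, RTT, and loss rates are illustrative assumptions:

```python
# Sketch of the Mathis approximation for steady-state TCP throughput
# as a function of packet loss rate p.
from math import sqrt

def mathis_throughput_bps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Upper bound on steady-state TCP throughput for a given loss rate."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate))

mss = 1460   # bytes, typical Ethernet MSS
rtt = 0.020  # 20 ms round-trip time

for loss in (1e-5, 1e-4, 2e-4, 1e-3):  # 0.001%, 0.01%, 0.02% (the SQA figure), 0.1%
    mbps = mathis_throughput_bps(mss, rtt, loss) / 1e6
    print(f"loss {loss * 100:.3f}% -> ~{mbps:.0f} Mb/s")
```

Even at the 0.02% figure mentioned above, a single TCP flow with a 20 ms RTT tops out around 40 Mb/s, which shows why loss thresholds well below 1% matter for high-speed links.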
