Cisco 6807XL: CPU usage stuck at 25/30%

cisco, cpu, vss

I've recently installed a new 6807XL Chassis in VSS in my datacenter.

After 3 weeks, I keep seeing CPU usage around 25% (in the per-second, per-minute and per-five-minute views). Looking at the historical graphs, I can see this has been the case ever since the VSS was brought up.

When I try to see which process is consuming the most, the highest is the slcp process at around 4%, followed by Spanning Tree at around 2%.
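
For reference, this is roughly how I'm pulling that list (the full output is in Edit 2 below; the exact keywords may vary slightly by release):

    C6807XL-VSS#show processes cpu sorted 5min | exclude 0.00%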

I don't run much on the VSS: just static routes, a bit of OSPF, and the VSL link of course. Around 10,000 clients are connected to this VSS core, which does the routing and switching. I also have about 3,000 lines of IP access lists, all of which log matches and denies.
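
(I realise the log keyword punts matching packets to the route processor to generate syslog messages. If that turns out to be part of the load, one option would be to rate-limit ACL logging, roughly along these lines; this is only a sketch and the exact commands and ranges depend on the feature set:)

    ! throttle how often packets matching "log" ACEs are punted for logging (milliseconds)
    C6807XL-VSS(config)#ip access-list logging interval 1000
    ! refresh an existing log entry only after this many further matches
    C6807XL-VSS(config)#ip access-list log-update threshold 1000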

The IOS version is 15.1(2)SY1 (IP Services).

Should I be worried about this kind of percentage? To my knowledge, normal values are usually closer to 5%.
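
If I'm reading the show processes cpu summary correctly, the number before the slash is total CPU and the number after it is the interrupt (punted-traffic) share, so most of my load looks process-level rather than interrupt-driven. For example, from the output in Edit 2 below:

    C6807XL-VSS#show processes cpu | include CPU utilization
    CPU utilization for five seconds: 29%/9%; one minute: 26%; five minutes: 24%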


Edit 1

My CPU Historical Graph:

    C6807XL-VSS#sh processes cpu history

    2222222222222222222222222222222222222333332222111111111122
    2244444444441111144444111118888833333111110000999996666666
100
 90
 80
 70
 60
 50
 40
 30                            *****     *****              **
 20 **********************************************************
 10 **********************************************************
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5
               CPU% per second (last 60 seconds)

    3422233233322849923322333224333573266433373354325244247744
    1098420831256725241569234573112169569008255000763746619972
100                *
 90              * **
 80              * **               *        *            **
 70              * **               *  **    *            **
 60              * **               *  **    *            **
 50              * **              **  **    *  *   *  *  *#*
 40  *           ****  *       *   *#* *** * #* *** * ** *##**
 30 **** ********#*## *********#**####*###*#*##*###***#***###*
 20 ##########################################################
 10 ##########################################################
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5
               CPU% per minute (last 60 minutes)
              * = maximum CPU%   # = average CPU%

    7544443444498577696877374645464538498648877789864645674549987888787878
    9132939110527897235108828159076475618796321950899129812884364035733681
100
 90            **    *               * **  *    ***          ***   *   *
 80 *          ** ** * * *           * **  **  ****          *** ***** ***
 70 *          ** ** ***** *     *   * *** *********    **   *************
 60 *          *********** * * * *   * *** ********* * *** * *************
 50 **  *     ************ ***** *** ***************** *** ***************
 40 **********************************************************************
 30 #**************************************#****####****************##***#
 20 ######################################################################
 10 ######################################################################
   0....5....1....1....2....2....3....3....4....4....5....5....6....6....7.
             0    5    0    5    0    5    0    5    0    5    0    5    0
                   CPU% per hour (last 72 hours)
                  * = maximum CPU%   # = average CPU%

Edit 2

CPU consumption:

Peak:

CPU utilization for five seconds: 48%/29%; one minute: 27%; five minutes: 24%
 PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
 302    37851048   3231408      11713  3.43%  2.23%  2.10%   0 SEA write CF pro
 685    37324288  36021905       1036  2.79%  2.22%  2.11%   0 Spanning Tree
  16    11694012 124309950         94  1.67%  0.97%  0.92%   0 ARP Input
 171    25299104   2761699       9160  1.59%  1.34%  1.35%   0 OIR Process
 188    71498212 166026936        430  1.43%  3.87%  3.97%   0 slcp process
  74     2099116    167699      12517  1.43%  0.17%  0.11%   0 Per-minute Jobs
 679     9251792 125616676         73  1.11%  0.90%  0.89%   0 IP Input
 937     7254584  80249801         90  0.87%  0.44%  0.42%   0 Port manager per
 922      453432     99151       4573  0.87%  0.07%  0.01%   0 CM hw consistenc
 699     3728624   5213752        715  0.63%  0.40%  0.37%   0 mDNS
 <...output truncated...>

Low:

CPU utilization for five seconds: 29%/9%; one minute: 26%; five minutes: 24%
 PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process
 188    71499092 166028826        430  5.51%  3.94%  3.98%   0 slcp process
 302    37851672   3231454      11713  2.63%  2.22%  2.11%   0 SEA write CF pro
 685    37324828  36022200       1036  2.39%  2.30%  2.14%   0 Spanning Tree
 171    25299392   2761729       9160  1.59%  1.35%  1.35%   0 OIR Process
 679     9251960 125618970         73  1.19%  0.87%  0.89%   0 IP Input
  16    11694192 124312162         94  1.11%  0.94%  0.91%   0 ARP Input
 907     8811320   2294505       3840  0.95%  0.48%  0.47%   0 Env Poll
 937     7254672  80250797         90  0.63%  0.43%  0.42%   0 Port manager per
 699     3728724   5213875        715  0.47%  0.42%  0.38%   0 mDNS
 744     7597964   8291508        916  0.39%  0.44%  0.42%   0 XDR receive
 <...output truncated...>

Best Answer

What you're experiencing is most likely one of the bugs below. We had exactly the same problem, and it was all fixed by upgrading to 15.1(2)SY7. The following is copied from the Cisco bug reports:

We hit the first bug because we used 10G SFPs in the supervisor to create the VSS VSL. The second bug is also fairly common; I've heard of a lot of people running into it.
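
Before scheduling the upgrade, you can confirm you are on an affected release and see which pluggables are actually populated; show inventory lists transceivers by PID (the include patterns below are just examples):

    6800-A#show version | include IOS Software
    6800-A#show inventory | include SFP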


  1. Cisco Bug: CSCur96442 - C6800: High CPU due to "slcp process" when 10G ports have 10G SFPs

    Last Modified: Oct 01, 2018

    Products (1): Cisco Catalyst 6000 Series Switches

    Known Affected Releases: 15.1(2)SY5.1

    Description (partial)

    Symptom:

    The Catalyst 6800 platform reports elevated CPU usage due to the "slcp process". CPU utilization varies with the number of 10G ports in use: the more ports in use, the higher the CPU usage. Example:

    6800-A#show processes cpu sorted | inc slcp process
     108       50400     89664        562  9.43%  8.13%  4.72%   0 slcp process
    

    Conditions:

    This defect applies only when the 10G ports use 10G transceivers, e.g. SFP-10G-SR, SFP-10G-LR, or SFP-10G-LRM.

    High CPU is NOT seen when the 10G ports are admin down.


  2. Cat6800: High CPU usage due to "slcp process" when GLC-T plugged in

    Description

    Symptom:

    The Catalyst 6800 platform reports elevated CPU usage due to the "slcp process". CPU utilization varies with the number of 1G ports in use: the more ports in use, the higher the CPU usage.

    Example:

    6800-A#show processes cpu sorted | inc slcp process
    108       50400     89664        562  39.38%  34.34%  38.27%   0 slcp process
    

    Conditions:

    This defect applies only when the 1G ports use GLC-T transceivers.

    Workaround:

    If the GLC-Ts are not in use, shut down the interfaces to reduce the CPU load.

    Further Problem Description:

    The fix introduced for this issue optimizes the CPU cycles used to poll the transceivers for data/statistics. In essence, it reduces CPU usage by approximately 50%; it does not bring CPU usage down to 0.

    For the similar issue reported for 10G SFPs, please refer to CSCur96442.
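
If you can't upgrade straight away, the workaround in both bug notes boils down to shutting down the unused ports that carry the affected transceivers and then re-checking the slcp process. Roughly (the interface range is only an example; adjust it to your unused ports):

    6800-A(config)#interface range TenGigabitEthernet1/5 - 8
    6800-A(config-if-range)#shutdown
    6800-A(config-if-range)#end
    6800-A#show processes cpu sorted | include slcp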