Multithreading – Understanding Context Switching Behavior


I need to ask a question that has been bugging me for some time now:

If I have a single core and one OS thread, this thread will get 100% of the CPU time and all is good.

If I have a single core and two or more OS threads, they will share the CPU time using time slices.

So, are the time slices always the same amount of time, no matter how many threads there are?

What I'm trying to get at is: is the amount of work the CPU can do the same whether I have two threads or 10,000 threads?
I'm well aware that each individual thread will progress more slowly, since they share a resource, but will the total amount of work the CPU can do be the same?

e.g.

[T1 ] [T2 ] [T1 ] [T2 ] [T1 ] [T2 ] [T1 ] [T2 ] [T1 ] [T2 ]

[T1 ] [T2 ] [T3 ] [T4 ] [T5 ] [T6 ] [T1 ] [T2 ] [T3 ] [T4 ]

----time-------------------------------------------------->
img. 1

In the illustration above there are 2 vs. 6 threads, but the total amount of work done would be the same.
Is this true?
Or is there something else that affects this when there are more threads, causing each slice to be smaller or the context switches between threads to take longer?

e.g.

[T1 ] [T2 ] [T1 ] [T2 ] [T1 ] [T2 ] [T1 ] [T2 ] [T1 ] [T2 ]

[T1 ]   [T2 ]    [T3 ]    [T4 ]    [T5 ]    [T6 ]    [T1 ] 

----time-------------------------------------------------->
img. 2

I'm not asking if it is a good idea to use 1000 threads…

[Edit]

To clarify what I'm trying to understand:

Given a fixed amount of time x, e.g. one minute.
And given that the code does not use locks or anything else that would make a thread block.

If I have two threads, some percentage y of that time will be spent on context switching.
If I have 1000 threads, will y be greater, or will it be the same as in the two-thread case?
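
To make this concrete, here is a minimal sketch of how I imagine measuring it (assuming a POSIX system with pthreads; the spin-loop "work", the thread counts, and the 5-second window are arbitrary illustrative choices):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Each worker spins, counting iterations, until told to stop. */
    static atomic_int stop_flag;

    static void *worker(void *arg)
    {
        unsigned long long *count = arg;
        while (!atomic_load(&stop_flag))
            (*count)++;                 /* one unit of "work" */
        return NULL;
    }

    /* Run nthreads CPU-bound threads for `seconds` of wall-clock time
     * and return the total work done across all of them. */
    static unsigned long long run(int nthreads, int seconds)
    {
        pthread_t *tids = malloc(nthreads * sizeof *tids);
        unsigned long long *counts = calloc(nthreads, sizeof *counts);

        atomic_store(&stop_flag, 0);
        for (int i = 0; i < nthreads; i++)
            pthread_create(&tids[i], NULL, worker, &counts[i]);

        sleep(seconds);                 /* the fixed time window */
        atomic_store(&stop_flag, 1);

        unsigned long long total = 0;
        for (int i = 0; i < nthreads; i++) {
            pthread_join(tids[i], NULL);
            total += counts[i];         /* join makes this read safe */
        }
        free(tids);
        free(counts);
        return total;
    }

    int main(void)
    {
        /* If context-switch overhead were constant, these totals
         * should come out (nearly) the same. Adjacent counters share
         * cache lines, so treat the numbers as rough, not exact. */
        int n[] = { 1, 2, 100, 1000 };
        for (int i = 0; i < 4; i++)
            printf("%4d threads: %llu increments in 5 s\n",
                   n[i], run(n[i], 5));
        return 0;
    }

Compile with cc -pthread bench.c -o bench; on Linux, running it under taskset -c 0 ./bench pins it to one core, matching the single-core premise of the question.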

Best Answer

If your quantum (the time slice each thread gets to run) is the same, then over a fixed period you'll have the same number of context switches regardless of the number of threads (assuming no threads are blocked). The amount of time each individual thread gets will shrink, but the context-switching overhead will be constant.

That's in an ideal world, though. In the real world you'll have locks and resource contention, so many threads will be waiting on I/O or otherwise not be runnable. That causes more context-switching overhead, and it's possible for the time spent switching to exceed the time spent actually running user code. The amount of extra time depends entirely on your workload, which is why it's not possible to predict the overhead in general.
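
To put rough numbers on the ideal-world argument, here is a back-of-envelope sketch; the 10 ms quantum and 5 µs switch cost below are assumed, illustrative values, not measurements from any particular OS:

    #include <stdio.h>

    int main(void)
    {
        double window   = 60.0;      /* the one-minute window */
        double quantum  = 0.010;     /* assumed 10 ms time slice */
        double sw_cost  = 0.000005;  /* assumed 5 us per switch */

        double switches = window / (quantum + sw_cost);
        double y        = switches * sw_cost / window * 100.0;

        /* The thread count never appears: with a fixed quantum and all
         * threads runnable, the number of switches per minute, and so
         * y, is the same for 2 threads or 1000. */
        printf("switches per minute: %.0f\n", switches);
        printf("y = %.3f%%\n", y);
        return 0;
    }

With these assumed values that works out to roughly 6000 switches and y of about 0.05% per minute, independent of the thread count, as long as every thread stays runnable.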