Apologies if this question is stupid. I tried to find an answer online for quite some time but couldn't, hence I'm asking here. I am learning about threads, and I've been going through this link and this Linux Plumbers Conference 2013 video about kernel-level and user-level threads. As far as I understood, pthreads creates threads in userspace, and the kernel is not aware of this and views it all as a single process, unaware of how many threads are inside. In such a case,
- Who decides the scheduling of these user threads during the timeslice the process gets, given that the kernel sees it as a single process and is unaware of the threads? How is the scheduling done?
- If pthreads creates user-level threads, how are kernel-level or OS threads created from userspace programs, if required?
- According to the above link, the operating system kernel provides system calls to create and manage threads. So does the `clone()` system call create a kernel-level thread or a user-level thread?
  - If it creates a kernel-level thread, then `strace` of a simple pthreads program also shows `clone()` being used while executing, so why would those threads be considered user-level?
  - If it doesn't create a kernel-level thread, then how are kernel threads created from userspace programs?
- The link also says: "It require a full thread control block (TCB) for each thread to maintain information about threads. As a result there is significant overhead and increased in kernel complexity." So with kernel-level threads, is only the heap shared, with everything else individual to each thread?
Edit:
I was asking about user-level thread creation and its scheduling because here there is a reference to the Many-to-One Model, where many user-level threads are mapped to one kernel-level thread, and thread management is done in user space by the thread library. I've only seen references to using pthreads, but I'm unsure whether it creates user-level or kernel-level threads.
Best Answer
This is prefaced by the top comments.
The documentation you're reading is generic [not linux specific] and a bit outdated. And, more to the point, it is using different terminology. That is, I believe, the source of the confusion. So, read on ...
What it calls a "user-level" thread is what I'm calling an [outdated] LWP thread. What it calls a "kernel-level" thread is what is called a native thread in linux. Under linux, what is called a "kernel" thread is something else altogether [See below].
This was how userspace threads were done prior to NPTL (the Native POSIX Thread Library). It is also what SunOS/Solaris called an LWP, a lightweight process.

There was one process that multiplexed itself and created threads. IIRC, it was called the thread master process [or some such]. The kernel was not aware of this. The kernel didn't yet understand or provide support for threads.
But, because these "lightweight" threads were switched by code in the userspace-based thread master (aka the "lightweight process scheduler") [just a special user program/process], they were very slow to switch context.
Also, before the advent of "native" threads, you might have 10 processes. Each process gets 10% of the CPU. If one of the processes was an LWP that had 10 threads, these threads had to share that 10% and, thus, got only 1% of the CPU each.
All this was replaced by the "native" threads that the kernel's scheduler is aware of. This changeover was done 10-15 years ago.
Now, with the above example, we have 20 threads/processes that each get 5% of the CPU. And, the context switch is much faster.
It is still possible to have an LWP system under a native thread, but, now, that is a design choice, rather than a necessity.
Further, LWP works great if each thread "cooperates". That is, each thread's loop periodically makes an explicit call to a "context switch" function, voluntarily relinquishing the process slot so another LWP can run.
However, the pre-NPTL implementation in `glibc` also had to [forcibly] preempt LWP threads (i.e. implement timeslicing). I can't remember the exact mechanism used, but here's an example: the thread master had to set an alarm, go to sleep, wake up, and then send the active thread a signal. The signal handler would effect the context switch. This was messy, ugly, and somewhat unreliable.

It is [technically] incorrect to call that a kernel thread.
`pthread_create` creates a native thread. This runs in userspace and vies for timeslices on an equal footing with processes. Once created, there is little difference between a thread and a process.

The primary difference is that a process has its own unique address space. A thread, however, is a process that shares its address space with the other processes/threads that are part of the same thread group.
Kernel threads are not userspace threads, NPTL, native, or otherwise. They are created by the kernel via the `kernel_thread` function. They run as part of the kernel and are not associated with any userspace program/process/thread. They have full access to the machine: devices, the MMU, etc. Kernel threads run at the highest privilege level, ring 0. They also run in the kernel's address space, not in the address space of any user process/thread.

A userspace program/process may not create a kernel thread. Remember, it creates a native thread using `pthread_create`, which invokes the `clone` syscall to do so.

Threads are useful for getting things done, even for the kernel. So, it runs some of its code in various threads. You can see these threads by doing `ps ax`. Look and you'll see `kthreadd`, `ksoftirqd`, `kworker`, `rcu_sched`, `rcu_bh`, `watchdog`, `migration`, etc. These are kernel threads and not programs/processes.

UPDATE:
Remember that, as mentioned above, there are two "eras".
(1) Before the kernel got thread support (circa 2004?). This used the thread master (which, here, I'll call the LWP scheduler). The kernel just had the `fork` syscall.

(2) All kernels after that, which do understand threads. There is no thread master; instead, we have `pthreads` and the `clone` syscall. Now, `fork` is implemented as `clone`. `clone` is similar to `fork` but takes some arguments. Notably, a `flags` argument and a `child_stack` argument.

More on this below ...
There is nothing "magic" about a processor stack. I'll confine the discussion [mostly] to x86, but it is applicable to any architecture, even those that don't have a stack register (e.g. 1970s-era IBM mainframes, such as the IBM System/370).
Under x86, the stack pointer is `%rsp`. The x86 has `push` and `pop` instructions. We use these to save and restore things: `push %rcx` and [later] `pop %rcx`.
But suppose the x86 did not have `%rsp` or the `push`/`pop` instructions? Could we still have a stack? Sure, by convention. We [as programmers] agree that (e.g.) `%rbx` is the stack pointer.

In that case, a "push" of `%rcx` [in AT&T assembler] would decrement `%rbx` by the word size and then store `%rcx` at the location `%rbx` now points to. A "pop" of `%rcx` would do the reverse: load `%rcx` from where `%rbx` points, then increment `%rbx`. To make it easier, I'm going to switch to C "pseudo code" for the push/pop.
To create a thread, the LWP scheduler had to create a stack area using `malloc`. It then had to save this pointer in a per-thread struct and then kick off the child LWP. The actual code is a bit tricky; assume we have an (e.g.) `LWP_create` function that is similar to `pthread_create`.
With a kernel that understands threads, we use `pthread_create` and `clone`, but we still have to create the new thread's stack. The kernel does not create/assign a stack for a new thread. The `clone` syscall accepts a `child_stack` argument. Thus, `pthread_create` must allocate a stack for the new thread and pass that to `clone`.
Only a process or main thread is assigned its initial stack by the kernel, usually at a high memory address. So, if the process does not use threads, it normally just uses that pre-assigned stack.
But, if a thread is created, either an LWP or a native one, the starting process/thread must pre-allocate the area for the proposed thread with `malloc`. Side note: using `malloc` is the normal way, but the thread creator could just have a large pool of global memory, `char stack_area[MAXTASK][0x100000];`, if it wished to do it that way.
if it wished to do it that way.If we had an ordinary program that does not use threads [of any type], it may wish to "override" the default stack it has been given.
That process could decide to use `malloc` and the above assembler trickery to create a much larger stack if it were doing a hugely recursive function.

See my answer here: What is the difference between user defined stack and built in stack in use of memory?