It's not about Python.
But nevertheless, to answer you directly. Let's say you have a function:

def doSomethingWithParams(params):
    return params.doSomething()
Now you can call it later on with:

print(doSomethingWithParams(myParams))

or

print(doSomethingWithParams(yourParams))
That's how functions work: you can call them with whatever params you like; all the function cares about is that they are some kind of params it can handle.
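To make that concrete (the class names here are invented for illustration), the function works with any object that happens to provide a doSomething() method — what Python calls duck typing:

```python
class MyParams:
    def doSomething(self):
        return "my params did something"

class YourParams:
    def doSomething(self):
        return "your params did something"

def doSomethingWithParams(params):
    # No type check needed: anything with a doSomething() method works.
    return params.doSomething()

print(doSomethingWithParams(MyParams()))   # my params did something
print(doSomethingWithParams(YourParams())) # your params did something
```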
EDIT: A remark: I have never coded a line of Python in my life.
There are several implementations of Python, for example, CPython, IronPython, RPython, etc.
Some of them have a GIL, some don't. For example, CPython has the GIL:
From http://en.wikipedia.org/wiki/Global_Interpreter_Lock
Applications written in programming languages with a GIL can be designed to use separate processes to achieve full parallelism, as each process has its own interpreter and in turn has its own GIL.
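A minimal sketch of that separate-process approach using the standard library's multiprocessing module (the worker function and its workload are invented for illustration). Each worker runs in its own process, with its own interpreter and its own GIL, so the loops can genuinely run in parallel on a multi-core machine:

```python
from multiprocessing import Pool

def count_down(n):
    # Pure-Python, CPU-bound work: threads in one process would have to
    # take turns on this, but separate processes do not share a GIL.
    while n > 0:
        n -= 1
    return n

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = pool.map(count_down, [1_000_000] * 4)
    print(results)  # [0, 0, 0, 0]
```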
Benefits of the GIL
- Increased speed of single-threaded programs.
- Easy integration of C libraries that usually are not thread-safe.
Why Python (CPython and others) uses the GIL
In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple native threads from executing Python bytecodes at once. This lock is necessary mainly because CPython's memory management is not thread-safe.
The GIL is controversial because it prevents multithreaded CPython programs from taking full advantage of multiprocessor systems in certain situations. Note that potentially blocking or long-running operations, such as I/O, image processing, and NumPy number crunching, happen outside the GIL. Therefore it is only in multithreaded programs that spend a lot of time inside the GIL, interpreting CPython bytecode, that the GIL becomes a bottleneck.
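You can observe that bottleneck directly. In this small experiment (the loop size is arbitrary, and timings vary by machine and Python build), the same pure-Python loop runs twice sequentially and then in two threads; on a GIL build the threaded run is typically no faster, because the threads take turns interpreting bytecode:

```python
import threading
import time

def busy_loop(n):
    # Pure-Python bytecode: the running thread holds the GIL throughout.
    while n > 0:
        n -= 1

N = 5_000_000

start = time.perf_counter()
busy_loop(N)
busy_loop(N)
sequential = time.perf_counter() - start

start = time.perf_counter()
threads = [threading.Thread(target=busy_loop, args=(N,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s, threaded: {threaded:.2f}s")
```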
Python has a GIL, as opposed to fine-grained locking, for several reasons:
- It is faster in the single-threaded case.
- It is faster in the multi-threaded case for I/O-bound programs.
- It is faster in the multi-threaded case for CPU-bound programs that do their compute-intensive work in C libraries.
- It makes C extensions easier to write: there will be no switch of Python threads except where you allow it to happen (i.e. between the Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS macros).
- It makes wrapping C libraries easier. You don't have to worry about thread-safety. If the library is not thread-safe, you simply keep the GIL locked while you call it.
The GIL can be released by C extensions. Python's standard library releases the GIL around each blocking I/O call. Thus the GIL has no consequence for the performance of I/O-bound servers. You can therefore build networking servers in Python using processes (fork), threads, or asynchronous I/O, and the GIL will not get in your way.
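A sketch of why this works for I/O-bound threads, using time.sleep as a stand-in for a blocking I/O call (sleep, like the standard library's blocking calls, releases the GIL): four 0.2-second waits overlap instead of adding up to 0.8 seconds.

```python
import threading
import time

def fake_request():
    # time.sleep releases the GIL, just as a blocking socket read would.
    time.sleep(0.2)

start = time.perf_counter()
threads = [threading.Thread(target=fake_request) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
print(f"4 overlapping 0.2s waits took {elapsed:.2f}s")  # ~0.2s, not 0.8s
```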
Numerical libraries in C or Fortran can similarly be called with the GIL released. While your C extension is waiting for an FFT to complete, the interpreter will be executing other Python threads. A GIL is thus easier and faster than fine-grained locking in this case as well. This constitutes the bulk of numerical work. The NumPy extension releases the GIL whenever possible.
Threads are usually a bad way to write most server programs. If the load is low, forking is easier. If the load is high, asynchronous I/O and event-driven programming (e.g. using Python's Twisted framework) is better. The only excuse for using threads is the lack of os.fork on Windows.
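Twisted is one way to do event-driven I/O; the same single-threaded style is available in the standard library today as asyncio. A minimal sketch (the handler and its workload are invented for illustration):

```python
import asyncio

async def handle(n):
    await asyncio.sleep(0.1)  # stands in for awaiting a client's data
    return n * 2

async def main():
    # One thread, one event loop: the four waits overlap.
    return await asyncio.gather(*(handle(i) for i in range(4)))

results = asyncio.run(main())
print(results)  # [0, 2, 4, 6]
```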
The GIL is a problem if, and only if, you are doing CPU-intensive work in pure Python. Here you can get a cleaner design using processes and message-passing (e.g. mpi4py). There is also the multiprocessing module in the standard library (originally published as 'processing' on the Python cheese shop), which gives processes the same interface as threads (i.e. replace threading.Thread with multiprocessing.Process).
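A minimal sketch of that thread-like process interface (the worker function is invented for illustration): the constructor call mirrors threading.Thread, but the work runs in a separate interpreter with its own GIL, and results come back by message passing.

```python
from multiprocessing import Process, Queue

def worker(q, n):
    # Runs in a separate interpreter with its own GIL; the result is
    # sent back over a message-passing Queue, not via shared memory.
    q.put(sum(range(n)))

if __name__ == "__main__":
    q = Queue()
    p = Process(target=worker, args=(q, 1000))  # cf. threading.Thread(...)
    p.start()
    print(q.get())  # 499500
    p.join()
```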
Threads can be used to maintain responsiveness of a GUI regardless of the GIL. If the GIL impairs your performance (cf. the discussion above), you can let your thread spawn a process and wait for it to finish.
Best Answer
When you are starting out, it doesn't matter which one you choose. What is more important is getting a better understanding of how to parallelize work. If you don't have that base understanding, you will not be able to take advantage of the fine points that differentiate the two.
Pick one and get used to thinking about what work can be done in parallel. Think about ways you can break a large task into smaller pieces. Think about what pieces of memory all the tasks need access to and whether that value ever changes. Think about whether it is OK for each task to have its own local value to change, and whether combining all of the local values at the end could prevent contention.
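A sketch of that decomposition (the data and chunk size are arbitrary): each task sums its own chunk into a local accumulator, and the partial results are combined only at the end, so no task ever touches shared state.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    local = 0  # each task mutates only its own local value
    for x in chunk:
        local += x
    return local

if __name__ == "__main__":
    data = list(range(1000))
    # Break the large task into smaller, independent pieces.
    chunks = [data[i:i + 250] for i in range(0, len(data), 250)]
    with ProcessPoolExecutor() as pool:
        # Combine the local results only at the end.
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # 499500
```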
Once you think you have a handle on those types of things, then you can come back and look at the differences again. With a stronger understanding of concurrency, you will have a better understanding of how the two methods differ.