Up to Python 2.1, old-style classes were the only flavour available to the user.
The concept of (old-style) class is unrelated to the concept of type: if x is an instance of an old-style class, then x.__class__ designates the class of x, but type(x) is always <type 'instance'>.
This reflects the fact that all old-style instances, independently of
their class, are implemented with a single built-in type, called
instance.
New-style classes were introduced in Python 2.2 to unify the concepts of class and type.
A new-style class is simply a user-defined type, no more, no less.
If x is an instance of a new-style class, then type(x) is typically the same as x.__class__ (although this is not guaranteed; a new-style class instance is permitted to override the value returned for x.__class__).
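A small sketch of this distinction, runnable under Python 3 (where all classes are new-style); the class names Real and Liar are invented for illustration:

```python
# type(x) reads the instance's actual type; x.__class__ can be
# overridden, e.g. with a property, without affecting type(x).
class Real:
    pass

class Liar:
    @property
    def __class__(self):
        # Lie about the instance's class.
        return Real

x = Liar()
print(type(x).__name__)      # Liar
print(x.__class__.__name__)  # Real
```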
The major motivation for introducing new-style classes is to provide a unified object model with a full meta-model.
It also has a number of immediate benefits, like the ability to
subclass most built-in types, or the introduction of "descriptors",
which enable computed properties.
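Both benefits can be sketched in a few lines (the class Inches is an invented example, not from the text):

```python
# Subclassing a built-in type (float) and adding a computed property,
# which relies on the descriptor protocol of new-style classes.
class Inches(float):
    @property
    def centimeters(self):
        # Computed on access; nothing is stored on the instance.
        return self * 2.54

length = Inches(10)
print(length + 2)           # plain float arithmetic still works: 12.0
print(length.centimeters)
```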
For compatibility reasons, classes are still old-style by default.
New-style classes are created by specifying another new-style class
(i.e. a type) as a parent class, or the "top-level type" object if no
other parent is needed.
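For example (under Python 2 the first form is new-style and the second old-style; under Python 3 both produce a type):

```python
class NewStyle(object):   # explicitly new-style: object as parent
    pass

class Plain:              # old-style in Python 2, new-style in Python 3
    pass

print(isinstance(NewStyle, type))  # True
```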
The behaviour of new-style classes differs from that of old-style
classes in a number of important details in addition to what type
returns.
Some of these changes are fundamental to the new object model, like
the way special methods are invoked. Others are "fixes" that could not
be implemented before for compatibility concerns, like the method
resolution order in case of multiple inheritance.
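The method resolution order fix is visible on a classic diamond hierarchy; new-style classes use the C3 linearization, exposed via __mro__ (class names here are illustrative):

```python
# In old-style classes the lookup was naive depth-first, left-to-right,
# so A would be visited before C. New-style classes put A last.
class A: pass
class B(A): pass
class C(A): pass
class D(B, C): pass

print([cls.__name__ for cls in D.__mro__])
# ['D', 'B', 'C', 'A', 'object']
```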
Python 3 only has new-style classes.
No matter if you subclass from object or not, classes are new-style in Python 3.
Best Answer
Let's get one thing out of the way first. The explanation that yield from g is equivalent to for v in g: yield v does not even begin to do justice to what yield from is all about. Because, let's face it, if all yield from does is expand the for loop, then it does not warrant adding yield from to the language and preclude a whole bunch of new features from being implemented in Python 2.x.

What yield from does is establish a transparent bidirectional connection between the caller and the sub-generator:

- The connection is "transparent" in the sense that it propagates everything correctly, not just the elements being generated (e.g. exceptions are propagated too).
- The connection is "bidirectional" in the sense that data can be both sent to and received from a generator.

(If we were talking about TCP, yield from g might mean "now temporarily disconnect my client's socket and reconnect it to this other server socket".)

BTW, if you are not sure what sending data to a generator even means, you need to drop everything and read about coroutines first; they're very useful (contrast them with subroutines), but unfortunately lesser known in Python. Dave Beazley's Curious Course on Coroutines is an excellent start. Read slides 24-33 for a quick primer.
Reading data from a generator using yield from
Instead of manually iterating over reader(), we can just yield from it. That works, and we eliminated one line of code. And probably the intent is a little bit clearer (or not). But nothing life changing.
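The code blocks this refers to are missing from this copy; a sketch consistent with the description (the names reader and reader_wrapper are assumptions) might look like this:

```python
def reader():
    """Fake a read from a file, socket, etc."""
    for i in range(4):
        yield '<< %s' % i

def reader_wrapper_manual(g):
    # Manually iterate over data produced by reader.
    for v in g:
        yield v

def reader_wrapper(g):
    # The same delegation with yield from: one line shorter.
    yield from g

print(list(reader_wrapper(reader())))
# ['<< 0', '<< 1', '<< 2', '<< 3']
```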
Sending data to a generator (coroutine) using yield from - Part 1
Now let's do something more interesting. Let's create a coroutine called writer that accepts data sent to it and writes to a socket, fd, etc. Now the question is, how should the wrapper function handle sending data to the writer, so that any data that is sent to the wrapper is transparently sent to writer()?

The wrapper needs to accept the data that is sent to it (obviously) and should also handle the StopIteration when the for loop is exhausted. Evidently just doing for x in coro: yield x won't do. Here is a version that works. Or, we could do this.
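The code the text refers to is missing here; a sketch along these lines (writer appends to a list as a stand-in for a socket/fd, and the names are assumptions) shows both the manual wrapper and the yield from one:

```python
def writer(out):
    # Coroutine: appends data *sent* to it to out.
    while True:
        w = (yield)
        out.append('>> %s' % w)

def writer_wrapper_manual(coro):
    # Hand-written version: prime the coroutine, forward each value
    # sent to the wrapper, swallow StopIteration from the sub-generator.
    coro.send(None)              # prime the inner coroutine
    while True:
        try:
            x = (yield)          # value sent to the wrapper
            coro.send(x)         # forward it to writer
        except StopIteration:
            pass

def writer_wrapper(coro):
    # The yield from version: one line, same behaviour.
    yield from coro

out = []
wrap = writer_wrapper(writer(out))
wrap.send(None)                  # prime the wrapper
for i in range(4):
    wrap.send(i)
print(out)  # ['>> 0', '>> 1', '>> 2', '>> 3']
```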
That saves 6 lines of code, makes it much more readable, and it just works. Magic!
Sending data to a generator (coroutine) using yield from - Part 2 - Exception handling
Let's make it more complicated. What if our writer needs to handle exceptions? Let's say the writer handles a SpamException and it prints *** if it encounters one. What if we don't change writer_wrapper? Does it work? Let's try. Um, it's not working, because x = (yield) just raises the exception and everything comes to a crashing halt. Let's make it work, by manually handling exceptions and sending them or throwing them into the sub-generator (writer). This works. But so does this!
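Again the code is missing from this copy; a sketch of both versions (writer records to a list instead of printing, names assumed) might be:

```python
class SpamException(Exception):
    pass

def writer(out):
    # Records '***' when a SpamException is thrown into it.
    while True:
        try:
            w = (yield)
        except SpamException:
            out.append('***')
        else:
            out.append('>> %s' % w)

def writer_wrapper_manual(coro):
    # "This works": catch what is thrown at the wrapper and re-throw
    # it into the sub-generator.
    coro.send(None)                  # prime
    while True:
        try:
            try:
                x = (yield)
            except Exception as e:   # exception thrown at the wrapper...
                coro.throw(e)        # ...is thrown into writer
            else:
                coro.send(x)
        except StopIteration:
            pass

def writer_wrapper(coro):
    # "But so does this": yield from forwards .send() and .throw().
    yield from coro

out = []
wrap = writer_wrapper(writer(out))
wrap.send(None)
for item in [0, 1, 'spam', 2]:
    if item == 'spam':
        wrap.throw(SpamException)
    else:
        wrap.send(item)
print(out)  # ['>> 0', '>> 1', '***', '>> 2']
```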
The yield from transparently handles sending the values or throwing values into the sub-generator.

This still does not cover all the corner cases though. What happens if the outer generator is closed? What about the case when the sub-generator returns a value (yes, in Python 3.3+, generators can return values)? How should the return value be propagated? That yield from transparently handles all the corner cases is really impressive. yield from just magically works and handles all those cases.

I personally feel yield from is a poor keyword choice because it does not make the two-way nature apparent. There were other keywords proposed (like delegate), but they were rejected because adding a new keyword to the language is much more difficult than combining existing ones.

In summary, it's best to think of yield from as a transparent two-way channel between the caller and the sub-generator.