is there a reason besides PEP 8
Multiple module interaction
Your code becomes much more troublesome when there are multiple modules: there isn't a good way to parameterize the modules with the code you have. So you're left with three options:
- Duplicate the custom import-exit code in every module that needs to import modules (or at least for the imports you want to customize).
- Centralize all your imports in one module and have subsequent modules import from the central module for use in their local namespace.
- Only use it in one place, and accept that an import of a particular module may or may not go through your custom import-exit code: any other import executed first may be the one that initializes the module in `sys.modules`.
There is a better way. If you want to customize imports, Python has facilities for this: they are called import hooks. You can write a hook that does a lookup against a whitelist (or blacklist) of modules whose import you want to customize, and then have the hook exit explicitly when the import fails.
My understanding is that the other solution is not to use wildcards but to use regular typing. In that case, if we change the method signature to (...), then in the body of the method we can not only read from the list, we can also write into it. If we use the wildcard version, we cannot write any value other than `null` into that list. So in this view, wildcards actually decrease flexibility, not increase it.
The problem is that you're assuming the increased flexibility is for the person writing the function; it's for the person using the function. The more flexibility the implementation gives up, the more flexibility the caller gains. The restrictions on the implementation are guarantees for the caller.
Case in point: when you change `pushAll(List<E>)` to `pushAll(List<? extends E>)`, the implementer can no longer assume that he knows the precise type stored in the list, and that is why you can't safely write to the list. But because of that, the caller is now free to pass any list whose elements are a subtype of `E`, whereas before he could only pass lists of `E`.
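The producer side of this can be sketched in a few lines. The `Stack` class below is a minimal stand-in (the `push`/`pop`/`pushAll` names follow the Effective Java example, but the implementation itself is my own sketch):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

class Stack<E> {
    private final Deque<E> elements = new ArrayDeque<>();

    void push(E e) { elements.push(e); }
    E pop() { return elements.pop(); }
    boolean isEmpty() { return elements.isEmpty(); }

    // Producer: we only READ from src, so '? extends E' is safe.
    void pushAll(List<? extends E> src) {
        for (E e : src) {    // reading an element as E is always safe here
            push(e);
        }
        // src.add(...) would not compile: the precise element type is unknown.
    }
}
```

With this signature, a `Stack<Number>` accepts a `List<Integer>` or a `List<Double>`; with plain `pushAll(List<E>)` it would only accept a `List<Number>`.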
You claim that if you change `pushAll` to `<T extends E> void pushAll(List<T>)` you can write to the list, but that's not true in this case. You don't know what class `T` is ahead of time, so you can't possibly create a `T` to insert into the list. The only way you could possibly add a `T` to the list is if the caller gave it to you. That's why Effective Java argues that you should replace type parameters with wildcards if they only appear once in the method; the main purpose of type parameters is to allow you to give an unknown type a name so you can refer to it more than once.
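A side-by-side sketch of that advice (both method names here are hypothetical, chosen for illustration): the named type parameter buys nothing when it appears only once, so the wildcard form is preferred.

```java
import java.util.List;

class WildcardVsTypeParam {
    // Named type parameter: T appears only once, so naming it buys nothing.
    // We still can't create a T; we can only use elements the caller supplied.
    static <T extends Number> double sum(List<T> src) {
        double total = 0;
        for (T t : src) total += t.doubleValue();
        // src.add(new T());  // would not compile: type parameters can't be instantiated
        return total;
    }

    // Wildcard version: same power, simpler signature (Effective Java's advice).
    static double sumWildcard(List<? extends Number> src) {
        double total = 0;
        for (Number n : src) total += n.doubleValue();
        return total;
    }
}
```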
All in all, it looks to me that wildcards are actually used to decrease flexibility: `? extends X` discourages people from writing into the object, while `? super X` discourages people from reading from the object (try it and you will get only `Object` as the type of the returned value). (...) We can change the signature to `public <T extends E> void popAll(List<T> list)`. Now we can write or read as we want. There is no restriction.
You can't just swap `super` for `extends`. Let's say you have a `Stack<Number>`. What's going to happen if you pass a `List<Integer>` to `popAll` and the stack tries to insert a `Double` into the list? Likewise, suppose you pass a `List<Object>` to `pushAll`; what's going to happen if the list has a `String` and you try to insert it into a stack of numbers? Bad things. That's why you use `? extends E` when you want to read from the collection and `? super E` when you plan on writing to it; it's the only safe way to do it. (There's a mnemonic for remembering which to use: PECS. Producer extends, consumer super.)
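The PECS rule can be shown in one self-contained helper (`drain` is a hypothetical name, not a standard API): the source is a producer, the destination is a consumer.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

class Pecs {
    // PECS in one signature: src produces E values (extends),
    // dst consumes E values (super).
    static <E> void drain(Collection<? extends E> src, Collection<? super E> dst) {
        for (E e : src) {
            dst.add(e);  // safe: dst is declared to hold E or a supertype of E
        }
    }
}
```

A call like `Pecs.drain(List.of(1, 2, 3), new ArrayList<Object>())` compiles, because `List<Integer>` can produce elements and `List<Object>` can consume them. Swap the bounds and neither argument would be accepted.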
Besides, type parameters can't have lower bounds (you can't say `T super SomeClass`), so you're forced to use a wildcard for `super`.
Best Answer
It is not a bad idea, but it does have some consequences you should be aware of. It's a tradeoff.
It's simpler and shorter, with less programmer overhead typing boilerplate. And it's probably less likely to include stray imports that are not needed (though modern IDEs now detect and fix that for you, so maybe that doesn't matter).
It CAN result in code that worked fine suddenly failing to compile when you upgrade the version of your libraries. But that is insanely unlikely (I've been doing this for 40 years and I've NEVER seen it happen).
Personally, I try to keep my imports minimal as a form of documentation. For library code (code that's highly leveraged), it's more important to really understand your dependencies. For application code, it's a little less important.
No right or wrong - just go for what feels right, IMHO.