Let me give some guiding principles.
Principle #1. As outlined in http://docs.python.org/2/reference/simple_stmts.html, the performance overhead of asserts can be removed with the -O command line option, while the checks remain available for debugging. If performance is a problem, do that and leave the asserts in. (But don't do anything with side effects inside an assert, because it disappears under -O!)
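A minimal sketch of the point (the function and its precondition are made up for illustration): the assert guards the input during development, and running the interpreter with -O compiles it away.

```python
def apply_discount(price, discount):
    # The assert documents and enforces the precondition. Under
    # "python -O" it is stripped out entirely, so it costs nothing in
    # production while still catching bugs during development.
    assert 0 <= discount <= 1, "discount must be between 0 and 1"
    return price * (1 - discount)

# __debug__ is True normally and False under -O (when asserts are stripped).
print(apply_discount(100, 0.25))
```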
Principle #2. If you're checking a condition whose failure should be a fatal error, use an assert. There is no value in doing anything else. If someone later wants different behavior, they can change your code or avoid that method call.
Principle #3. Do not disallow something just because you think it is a stupid thing to do. So what if your method allows strings through? If it works, it works.
Principle #4. Disallow things that are signs of likely mistakes. For instance consider being passed a dictionary of options. If that dictionary contains things that are not valid options, then that's a sign that someone didn't understand your API, or else had a typo. Blowing up on that is more likely to catch a typo than it is to stop someone from doing something reasonable.
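A sketch of Principle #4, assuming a hypothetical connect function and option names: unknown keys in an options dictionary are rejected loudly rather than silently ignored.

```python
VALID_OPTIONS = {"timeout", "retries", "verbose"}  # hypothetical option names

def connect(**options):
    # An unknown key is almost always a typo or a misreading of the API,
    # so blow up on it instead of silently ignoring it.
    unknown = set(options) - VALID_OPTIONS
    if unknown:
        raise TypeError("unknown options: " + ", ".join(sorted(unknown)))
    return options
```

A call like connect(timout=5) then fails immediately with a clear message, instead of quietly running with the default timeout.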
Based on the first two principles, your second version can be thrown away. Which of the other two you prefer is a matter of taste. Which do you think is more likely? That someone will pass a non-customer to add_customer and things will break (in which case version 3 is preferred), or that someone will at some point want to replace your customer with a proxy object of some kind that responds to all of the right methods (in which case version 1 is preferred)?
Personally I've seen both failure modes. I'd tend to go with version 1 out of the general principle that I'm lazy and it is less typing. (Also that kind of failure usually tends to show up sooner or later in a fairly obvious way. And when I want to use a proxy object, I get really annoyed at people who have tied my hands.) But there are programmers I respect who would go the other way.
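The question's versions aren't reproduced here, but the contrast is roughly between these two styles (Customer, Repository, and the method names are stand-ins):

```python
class Customer(object):
    pass

class Repository(object):
    def __init__(self):
        self.customers = []

    # Version 1 style: duck typing. Anything that behaves like a customer
    # is accepted, so a proxy object works with no changes.
    def add_customer_v1(self, customer):
        self.customers.append(customer)

    # Version 3 style: explicit type check. A wrong argument fails here,
    # at the call site, rather than later in some distant method.
    def add_customer_v3(self, customer):
        assert isinstance(customer, Customer), "expected a Customer"
        self.customers.append(customer)
```

Version 1 is less typing and leaves the caller free to substitute a proxy; version 3 surfaces a bad argument immediately.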
It is an acceptable form. As @Giorgio said, I would put the closure after the captured variable definitions to ease the flow of reading.
The alternative would be to define a separate function taking a, b, and c as extra parameters. That makes five parameters, which is a lot. The closure lets you avoid repeating yourself in a very simple way, which is a big win for your version.
You can use the timeit module to compare the performance of small snippets, and you can check for yourself that a closure is not heavy machinery. The only problem I see is that it adds a level of nesting, so if you find yourself writing a big closure, you should extract the complex part into a separate function. In this case I don't think it is an issue.
import timeit
import random
import string

def func1(param1, param2):
    def func2(foo, bar):
        return "{0} {1} {2:0.2f} {3} {4} {0}".format('*'*a, b, c, foo, bar)
    a = random.randrange(10)
    b = ''.join(random.choice(string.letters) for i in xrange(10))
    c = random.gauss(0, 1)
    if param1:
        func2(a*c, param1)
    else:
        if param2 > 0:
            func2(param2, param2)

def func4(foo, bar, a, b, c):
    return "{0} {1} {2:0.2f} {3} {4} {0}".format('*'*a, b, c, foo, bar)

def func3(param1, param2):
    a = random.randrange(10)
    b = ''.join(random.choice(string.letters) for i in xrange(10))
    c = random.gauss(0, 1)
    if param1:
        func4(a*c, param1, a, b, c)
    else:
        if param2 > 0:
            func4(param2, param2, a, b, c)

print timeit.timeit('func1("tets", "")',
                    number=100000,
                    setup="from __main__ import func1")
print timeit.timeit('func3("tets", "")',
                    number=100000,
                    setup="from __main__ import func3")
I think the reason is implementation simplicity. Let me elaborate.
The default value of an argument is an expression that needs to be evaluated. In your case it is a simple expression that does not depend on the enclosing scope, but it can be something that contains free variables, e.g. def ook(item, lst=something.defaultList()). If you were designing Python, you would have a choice: evaluate it once, when the function is defined, or every time the function is called. Python chooses the first (unlike Ruby, which goes with the second option). There are some benefits to this.
First, you get some speed and memory gains. In most cases default arguments are immutable, and Python can construct them just once instead of on every function call. This saves (some) memory and time. Of course, it doesn't work quite as well with mutable values, but there is a well-known way to work around that.
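The mutable case, and the standard workaround, can be sketched like this (the function names are made up):

```python
def append_bad(item, lst=[]):
    # The [] is evaluated once, at definition time, so every call that
    # relies on the default shares the same list object.
    lst.append(item)
    return lst

def append_good(item, lst=None):
    # The usual workaround: a None sentinel plus a fresh list per call.
    if lst is None:
        lst = []
    lst.append(item)
    return lst

print(append_bad(1))   # [1]
print(append_bad(2))   # [1, 2]  -- the same list, still growing
print(append_good(1))  # [1]
print(append_good(2))  # [2]     -- a fresh list each call
```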
Another benefit is simplicity. It's quite easy to understand how the expression is evaluated: it uses the lexical scope in effect when the function is defined. If Python went the other way, the scope might change between definition and invocation, making bugs harder to track down. Python goes a long way to be extremely straightforward in cases like this.
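Definition-time evaluation is easy to demonstrate: the default is computed once, at the def statement, from the bindings visible at that moment.

```python
x = 10

def f(n=x):
    # The default expression x was evaluated once, when the "def"
    # statement ran, using the value x had at that moment.
    return n

x = 99
print(f())  # still 10: rebinding x later does not affect the stored default
```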