Python – Working Through the Single Responsibility Principle (SRP) When Calls Are Expensive

Tags: methods, performance, python, single-responsibility

Some base points:

  • Python method calls are "expensive" because of the interpreter's per-call overhead. In theory, if your code is simple enough, breaking it into small functions has a negative impact on performance while gaining only readability and reuse (a big win for developers, not so much for users). A minimal sketch of that call overhead follows this list.
  • The single responsibility principle (SRP) keeps code readable and makes it easier to test and maintain.
  • My project is one where we want readable code, tests, and runtime performance all at once.
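
To put a rough number on the first point, here is a minimal micro-benchmark of bare call overhead using the standard library's timeit module; the noop function is purely illustrative, and exact figures vary by machine and interpreter:

import timeit

def noop():
    pass

# Each iteration of the first statement pays the cost of one function call;
# the second does no call at all. The gap between them is the per-call overhead.
print(timeit.timeit("noop()", globals=globals()))
print(timeit.timeit("pass"))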

For instance, code like the following, which makes four method calls, is slower than the version after it, which makes just one function call.

from operator import add

class Vector:
    def __init__(self, list_of_3):
        self.coordinates = list_of_3

    def move(self, movement):
        # Element-wise addition of the movement onto the coordinates.
        self.coordinates = list(map(add, self.coordinates, movement))
        return self.coordinates

    def revert(self):
        # Reverse the order of the coordinates.
        self.coordinates = self.coordinates[::-1]
        return self.coordinates

    def get_coordinates(self):
        return self.coordinates

## Operation with one vector
vec3 = Vector([1, 2, 3])
vec3.move([1, 1, 1])
vec3.revert()
vec3.get_coordinates()

In comparison to this:

from operator import add

def move_and_revert_and_return(vector, movement):
    # One call: move (element-wise add) and revert (reverse) in a single step.
    return list(map(add, vector, movement))[::-1]

move_and_revert_and_return([1, 2, 3], [1, 1, 1])
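
To make the comparison concrete, here is a rough timeit sketch; it assumes both definitions above live in the same module, and the exact numbers will vary by machine and interpreter:

import timeit

# Four calls (constructor plus three methods) per iteration:
class_version = (
    "v = Vector([1, 2, 3]); "
    "v.move([1, 1, 1]); v.revert(); v.get_coordinates()"
)
# One call per iteration:
function_version = "move_and_revert_and_return([1, 2, 3], [1, 1, 1])"

print(timeit.timeit(class_version, globals=globals(), number=100_000))
print(timeit.timeit(function_version, globals=globals(), number=100_000))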

If I parallelize something like that, I clearly lose performance. Mind you, this is just an example; my project has several mini math routines like it. While the decomposed style is much easier to work with, our profilers dislike it.


How and where do we embrace the SRP without compromising performance in Python, given that the language's implementation makes calls inherently costly?

Are there workarounds, like some sort of pre-processor that inlines things for a release build?

Or is Python simply poor at handling code breakdown altogether?

Best Answer

is Python simply poor at handling code breakdown altogether?

Unfortunately, yes. Python is slow, and there are many anecdotes about people drastically increasing performance by inlining functions and making their code ugly.

There is a workaround: Cython, a compiled superset of Python that is much faster.
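
As a rough illustration, here is a minimal Cython sketch of the example above (the module name and cdef helpers are illustrative, not a definitive port); cdef functions are called at C speed after compilation, so the SRP-style decomposition costs far less:

# vector_ops.pyx (illustrative name; build with `cythonize -i vector_ops.pyx`)

cdef list _move(list vector, list movement):
    # Element-wise addition, as a cheap C-level call.
    return [a + b for a, b in zip(vector, movement)]

cdef list _revert(list vector):
    return vector[::-1]

def move_and_revert_and_return(list vector, list movement):
    # The decomposition survives, but the inner calls no longer pay
    # Python's per-call overhead.
    return _revert(_move(vector, movement))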

Edit: I just wanted to address some of the comments and other answers, although the thrust of them is perhaps not Python-specific but about optimisation in general.

  1. Don't optimise until you have a problem, and then look for bottlenecks

    Generally good advice. But the assumption is that 'normal' code is usually performant, and this isn't always the case. Individual languages and frameworks each have their own idiosyncrasies; in this case, function calls. (A minimal profiling sketch for finding such bottlenecks follows this list.)

  2. It's only a few milliseconds; other things will be slower

    If you are running your code on a powerful desktop computer, you probably don't care, as long as your single-user code executes in a few seconds.

    But business code tends to run for multiple users and requires more than one machine to support the load. If your code runs twice as fast, you can serve twice the number of users or use half the number of machines.

    If you own your machines and data centre, then you generally have a big chunk of headroom in CPU power. If your code runs a bit slow, you can absorb it, at least until you need to buy a second machine.

    In these days of cloud computing, where you use exactly the compute power you require and no more, there is a direct cost for non-performant code.

    Improving performance can drastically cut the main expense of a cloud-based business, and performance really should be front and centre.
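
For completeness, here is a minimal bottleneck-finding sketch using the standard library's cProfile and pstats modules; the hot_path function is a purely illustrative stand-in for the kind of mini math routine described in the question:

import cProfile
import pstats

def hot_path():
    # Repeatedly move and revert a small vector, as in the question's example.
    vec = [1, 2, 3]
    for _ in range(100_000):
        vec = [a + b for a, b in zip(vec, [1, 1, 1])][::-1]
    return vec

cProfile.run("hot_path()", "hot_path.stats")
# Show the ten entries with the largest cumulative time.
pstats.Stats("hot_path.stats").sort_stats("cumulative").print_stats(10)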
