Functional Programming – Risks of Mutable Objects in C#

c# · functional-programming · immutability · object-oriented · programming-practices

I can see the trade-offs between mutable and immutable objects: immutable objects remove a whole class of hard-to-troubleshoot issues in multi-threaded programming caused by shared, writable state. Mutable objects, on the other hand, let us work with an object's identity rather than creating a new copy on every change, which can improve performance and memory usage, especially for larger objects.

One thing I am trying to understand is what can go wrong with mutable objects in the context of functional programming. For example, one of the points made to me is that the result of calling functions in a different order is no longer deterministic.

I am looking for a real, concrete example where it is very apparent what can go wrong when using a mutable object in functional programming. Basically, if it is bad, it is bad irrespective of the OO or functional paradigm, right?

I believe my own statement below already answers this question, but I still need an example so that I can feel it more naturally.

OO helps to manage dependencies and write easier, more maintainable
programs with the aid of tools like encapsulation, polymorphism, etc.

Functional programming has the same motive of promoting maintainable
code, but it uses a style that eliminates the need for those OO tools
and techniques – one part of which, I believe, is minimizing side
effects and using pure functions.

Best Answer

I think the importance is best demonstrated by comparing with an OO approach.

E.g., say we have an object:

class Order
{
    public string Status { get; set; }

    public void Purchase()
    {
        this.Status = "Purchased";
    }
}

In the OO paradigm the method is attached to the data, and it makes sense for that data to be mutated by the method.

var order = new Order();
order.Purchase();
Console.WriteLine(order.Status); // "Purchased"

In the Functional Paradigm we define a result in terms of the function. A purchased order IS the result of the purchase function applied to an order. This implies a few things which we need to be sure of:

var order = new Order(); //this is a 'new order'
var purchasedOrder = purchase(order); // this is a 'purchased order'
Console.WriteLine(order.Status); // "New" order is still a 'new order'

Would you expect order.Status == "Purchased"?
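One way to make that guarantee hold in C# is to make Order immutable, so that purchase has no choice but to return a new instance. A minimal sketch, assuming C# 9+ record syntax; the Purchase helper below is a hypothetical stand-in for the answer's purchase function:

```csharp
using System;

// Immutable order: positional record, no setters to mutate after construction.
public record Order(string Status = "New");

public static class Program
{
    // Purchase returns a fresh copy with the new status; the input is untouched.
    public static Order Purchase(Order order) => order with { Status = "Purchased" };

    public static void Main()
    {
        var order = new Order();
        var purchasedOrder = Purchase(order);

        Console.WriteLine(order.Status);          // "New": the original is unchanged
        Console.WriteLine(purchasedOrder.Status); // "Purchased"
    }
}
```

The `with` expression does the copy-on-write for us, so callers never have to worry about aliasing.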

It also implies that our functions are referentially transparent, i.e. running them twice with the same input should produce the same result each time.

var order = new Order();                       // new order
var purchasedOrder = purchase(order);          // purchased order
var purchasedOrder2 = purchase(order);         // another purchased order
var purchasedTwice = purchase(purchasedOrder); // error! can't purchase an order twice

If order were changed by the first purchase call, the second call (producing purchasedOrder2) would fail.
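To see that failure concretely, here is a sketch of a mutating purchase; the double-purchase guard and the exception are hypothetical details added for illustration:

```csharp
using System;

public class MutableOrder
{
    public string Status { get; set; } = "New";
}

public static class Program
{
    // A mutating purchase: it changes its argument in place.
    public static MutableOrder Purchase(MutableOrder order)
    {
        if (order.Status == "Purchased")
            throw new InvalidOperationException("Cannot purchase an order twice");
        order.Status = "Purchased";
        return order;
    }

    public static void Main()
    {
        var order = new MutableOrder();
        var purchasedOrder = Purchase(order);  // fine: order becomes "Purchased"
        var purchasedOrder2 = Purchase(order); // throws: the first call already mutated order
    }
}
```

The same expression, `Purchase(order)`, now gives different results depending on how many times it has been called before, which is exactly the non-determinism the question asks about.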

By defining things as the results of functions, we can use those results without actually calculating them, which in programming terms is deferred execution.
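As a sketch of what deferred execution can look like, a "purchased order" can be held as a computation rather than a value. `Lazy<T>` is just one way to express this, and it assumes a pure Purchase function like the record-based one above:

```csharp
using System;

public record Order(string Status = "New");

public static class Program
{
    public static Order Purchase(Order order) => order with { Status = "Purchased" };

    public static void Main()
    {
        // Describe the result without computing it yet.
        var purchasedOrder = new Lazy<Order>(() => Purchase(new Order()));

        Console.WriteLine("Nothing has run yet");

        // The purchase actually happens only when the value is first requested.
        Console.WriteLine(purchasedOrder.Value.Status); // "Purchased"
    }
}
```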

This can be handy in and of itself, but once we accept that we neither know nor care exactly when a function will actually run, we can leverage parallel processing much more than we can in an OO paradigm.

We know that running one function won't affect the result of another, so we can leave the computer to execute them in any order it chooses, using as many threads as it likes.
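With a pure purchase function, the runtime is free to fan that work out across threads. A sketch using PLINQ, assuming the immutable Order and pure Purchase from the earlier sketch:

```csharp
using System;
using System.Linq;

public record Order(string Status = "New");

public static class Program
{
    public static Order Purchase(Order order) => order with { Status = "Purchased" };

    public static void Main()
    {
        var orders = Enumerable.Range(0, 1000).Select(_ => new Order()).ToList();

        // Safe to parallelise: Purchase never touches shared or input state,
        // so the scheduling and interleaving of calls cannot change the results.
        var purchased = orders.AsParallel().Select(Purchase).ToList();

        Console.WriteLine(purchased.Count(o => o.Status == "Purchased")); // 1000
    }
}
```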

If a function mutates its input, we have to be much more careful about such things.