The blog post you quoted overstates its claim a bit. FP doesn't eliminate the need for design patterns. The term "design patterns" just isn't widely used to describe the same thing in FP languages. But they exist. Functional languages have plenty of best practice rules of the form "when you encounter problem X, use code that looks like Y", which is basically what a design pattern is.
However, it's correct that most OOP-specific design patterns are pretty much irrelevant in functional languages.
I don't think it should be particularly controversial to say that design patterns in general only exist to patch up shortcomings in the language.
And if another language can solve the same problem trivially, that other language won't have need of a design pattern for it. Users of that language may not even be aware that the problem exists, because, well, it's not a problem in that language.
Here is what the Gang of Four has to say about this issue:
The choice of programming language is important because it influences one's point of view. Our patterns assume Smalltalk/C++-level language features, and that choice determines what can and cannot be implemented easily. If we assumed procedural languages, we might have included design patterns called "Inheritance," "Encapsulation," and "Polymorphism." Similarly, some of our patterns are supported directly by the less common object-oriented languages. CLOS has multi-methods, for example, which lessen the need for a pattern such as Visitor. In fact, there are enough differences between Smalltalk and C++ to mean that some patterns can be expressed more easily in one language than the other. (See Iterator for example.)
(The above is a quote from the Introduction to the Design Patterns book, page 4, paragraph 3)
The main features of functional programming include functions as first-class values, currying, immutable values, etc. It doesn't seem obvious to me that OO design patterns are approximating any of those features.
What is the command pattern, if not an approximation of first-class functions? :)
In an FP language, you'd simply pass a function as the argument to another function.
In an OOP language, you have to wrap up the function in a class, which you can instantiate and then pass that object to the other function. The effect is the same, but in OOP it's called a design pattern, and it takes a whole lot more code.
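As a minimal sketch in Haskell (the names applyTwice and command are invented for illustration), the entire "pattern" collapses into passing a function value:

import Prelude

-- The Command pattern, FP-style: the "command" is just a function value.
applyTwice :: (Int -> Int) -> Int -> Int
applyTwice command x = command (command x)

main :: IO ()
main = print (applyTwice (+ 3) 10)  -- prints 16

In a classic OOP language, command would be an object implementing some Command interface, complete with a class declaration and an execute method.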
And what is the abstract factory pattern, if not currying? Pass parameters to a function a bit at a time, to configure what kind of value it spits out when you finally call it.
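As a rough Haskell sketch (the Widget type and every name here are invented for illustration), partial application plays the factory's role:

-- A hypothetical product type, invented for illustration.
data Widget = Widget { colour :: String, size :: Int } deriving Show

-- Supplying the first argument "configures" the factory...
makeWidget :: String -> Int -> Widget
makeWidget = Widget

-- ...and the partially applied function *is* the factory.
redFactory :: Int -> Widget
redFactory = makeWidget "red"

main :: IO ()
main = print (redFactory 42)  -- Widget {colour = "red", size = 42}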
So yes, several GoF design patterns are rendered redundant in FP languages, because more powerful, easier-to-use alternatives exist.
But of course there are still design patterns which are not solved by FP languages. What is the FP equivalent of a singleton? (Disregarding for a moment that singletons are generally a terrible pattern to use.)
And it works both ways too. As I said, FP has its design patterns too; people just don't usually think of them as such.
But you may have run across monads. What are they, if not a design pattern for "dealing with global state"? That's a problem that's so simple in OOP languages that no equivalent design pattern exists there.
We don't need a design pattern for "increment a static variable", or "read from that socket", because it's just what you do.
(Some will object that a monad is not a design pattern at all: saying a monad is a design pattern, the argument goes, is as absurd as saying the integers with their usual operations and zero element are a design pattern; a monad is a mathematical pattern, not a design pattern.)
But in (pure) functional languages, side effects and mutable state are impossible unless you work around them with the monad "design pattern", or one of the other methods that allow the same thing.
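For the curious, here is a minimal sketch of what that looks like in Haskell, using the State monad from Control.Monad.State (in the mtl package); tick is an invented name:

import Control.Monad.State

-- Thread a "mutable" counter through pure code.
tick :: State Int Int
tick = do
  n <- get       -- read the "variable"
  put (n + 1)    -- "write" to it
  return n

main :: IO ()
main = print (runState (tick >> tick >> tick) 0)  -- prints (2,3)

The state is threaded through explicitly by the monad's plumbing; no variable is ever actually mutated.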
Additionally, in functional languages which support OOP (such as F# and OCaml), it seems obvious to me that programmers using these languages would use the same design patterns available to every other OOP language. In fact, right now I use F# and OCaml every day, and there are no striking differences between the patterns I use in these languages vs the patterns I use when I write in Java.
Perhaps because you're still thinking imperatively? A lot of people, after dealing with imperative languages all their lives, have a hard time giving up on that habit when they try a functional language. (I've seen some pretty funny attempts at F#, where literally every function was just a string of 'let' statements, basically as if you'd taken a C program, and replaced all semicolons with 'let'. :))
But another possibility might be that you just haven't realized that you're solving problems trivially which would require design patterns in an OOP language.
When you use currying, or pass a function as an argument to another, stop and think about how you'd do that in an OOP language.
Is there any truth to the claim that functional programming eliminates the need for OOP design patterns?
Yep. :)
When you work in a FP language, you no longer need the OOP-specific design patterns. But you still need some general design patterns, like MVC or other non-OOP specific stuff, and you need a couple of new FP-specific "design patterns" instead. All languages have their shortcomings, and design patterns are usually how we work around them.
Anyway, you may find it interesting to try your hand at "cleaner" FP languages, like ML (my personal favorite, at least for learning purposes), or Haskell, where you don't have the OOP crutch to fall back on when you're faced with something new.
As expected, a few people objected to my definition of design patterns as "patching up shortcomings in a language", so here's my justification:
As already said, most design patterns are specific to one programming paradigm, or sometimes even one specific language. Often, they solve problems that only exist in that paradigm (see monads for FP, or abstract factories for OOP).
Why doesn't the abstract factory pattern exist in FP? Because the problem it tries to solve does not exist there.
So, if a problem exists in OOP languages which does not exist in FP languages, then clearly that is a shortcoming of OOP languages. The problem can be solved, but your language does not solve it for you; it requires a bunch of boilerplate code from you to work around it. Ideally, we'd like our programming language to magically make all problems go away. Any problem that is still there is, in principle, a shortcoming of the language. ;)
Here's the problem when I get too carried away with anonymous inner classes:
2009/05/27 16:35 1,602 DemoApp2$1.class
2009/05/27 16:35 1,976 DemoApp2$10.class
2009/05/27 16:35 1,919 DemoApp2$11.class
2009/05/27 16:35 2,404 DemoApp2$12.class
2009/05/27 16:35 1,197 DemoApp2$13.class
/* snip */
2009/05/27 16:35 1,953 DemoApp2$30.class
2009/05/27 16:35 1,910 DemoApp2$31.class
2009/05/27 16:35 2,007 DemoApp2$32.class
2009/05/27 16:35 926 DemoApp2$33$1$1.class
2009/05/27 16:35 4,104 DemoApp2$33$1.class
2009/05/27 16:35 2,849 DemoApp2$33.class
2009/05/27 16:35 926 DemoApp2$34$1$1.class
2009/05/27 16:35 4,234 DemoApp2$34$1.class
2009/05/27 16:35 2,849 DemoApp2$34.class
/* snip */
2009/05/27 16:35 614 DemoApp2$40.class
2009/05/27 16:35 2,344 DemoApp2$5.class
2009/05/27 16:35 1,551 DemoApp2$6.class
2009/05/27 16:35 1,604 DemoApp2$7.class
2009/05/27 16:35 1,809 DemoApp2$8.class
2009/05/27 16:35 2,022 DemoApp2$9.class
These are all classes that were generated when I was making a simple application and used copious amounts of anonymous inner classes; each class is compiled into a separate class file.
The "double brace initialization", as already mentioned, is an anonymous inner class with an instance initialization block, which means that a new class is created for each "initialization", all for the purpose of usually making a single object.
Considering that the Java Virtual Machine will need to read all those classes when using them, this can add time to the bytecode verification process and the like, not to mention the increase in disk space needed to store all those class files.
It seems there is a bit of overhead when using double-brace initialization, so it's probably not a good idea to go overboard with it. But as Eddie has noted in the comments, it's not possible to be absolutely sure of the impact.
Just for reference, double brace initialization is the following:
List<String> list = new ArrayList<String>() {{
    add("Hello");
    add("World!");
}};
It looks like a "hidden" feature of Java, but it is just a rewrite of:
List<String> list = new ArrayList<String>() {
    // Instance initialization block
    {
        add("Hello");
        add("World!");
    }
};
So it's basically an instance initialization block that is part of an anonymous inner class.
Joshua Bloch's Collection Literals proposal for Project Coin was along the lines of:
List<Integer> intList = [1, 2, 3, 4];
Set<String> strSet = {"Apple", "Banana", "Cactus"};
Map<String, Integer> truthMap = { "answer" : 42 };
Sadly, it made its way into neither Java 7 nor Java 8, and was shelved indefinitely.
Experiment
Here's the simple experiment I tested: making 1000 ArrayLists with the elements "Hello" and "World!" added to them via the add method, using the two methods below.
Method 1: Double Brace Initialization
List<String> l = new ArrayList<String>() {{
    add("Hello");
    add("World!");
}};
Method 2: Instantiate an ArrayList and add
List<String> l = new ArrayList<String>();
l.add("Hello");
l.add("World!");
I created a simple program to write out a Java source file to perform 1000 initializations using the two methods:
Test 1:
import java.util.ArrayList;
import java.util.List;

class Test1 {
    public static void main(String[] s) {
        long st = System.currentTimeMillis();
        List<String> l0 = new ArrayList<String>() {{
            add("Hello");
            add("World!");
        }};
        List<String> l1 = new ArrayList<String>() {{
            add("Hello");
            add("World!");
        }};
        /* snip */
        List<String> l999 = new ArrayList<String>() {{
            add("Hello");
            add("World!");
        }};
        System.out.println(System.currentTimeMillis() - st);
    }
}
Test 2:
import java.util.ArrayList;
import java.util.List;

class Test2 {
    public static void main(String[] s) {
        long st = System.currentTimeMillis();
        List<String> l0 = new ArrayList<String>();
        l0.add("Hello");
        l0.add("World!");
        List<String> l1 = new ArrayList<String>();
        l1.add("Hello");
        l1.add("World!");
        /* snip */
        List<String> l999 = new ArrayList<String>();
        l999.add("Hello");
        l999.add("World!");
        System.out.println(System.currentTimeMillis() - st);
    }
}
Please note that the elapsed time to initialize the 1000 ArrayLists and the 1000 anonymous inner classes extending ArrayList is measured using System.currentTimeMillis, so the timer does not have a very high resolution. On my Windows system, the resolution is around 15-16 milliseconds.
The results for 10 runs of the two tests were the following:
Test1 Times (ms) Test2 Times (ms)
---------------- ----------------
187 0
203 0
203 0
188 0
188 0
187 0
203 0
188 0
188 0
203 0
As can be seen, the double brace initialization has a noticeable execution time of around 190 ms.
Meanwhile, the ArrayList initialization execution time came out to be 0 ms. Of course, the timer resolution should be taken into account, but it is likely to be under 15 ms.
So, there seems to be a noticeable difference in the execution time of the two methods: the double brace initialization does appear to carry some overhead.
And yes, there were 1000 .class files generated by compiling the Test1 double brace initialization test program.
Best Answer
According to Pippenger [1996], when comparing a Lisp system that is purely functional (and has strict evaluation semantics, not lazy) to one that can mutate data, an algorithm written for the impure Lisp that runs in O(n) can be translated to an algorithm in the pure Lisp that runs in O(n log n) time (based on work by Ben-Amram and Galil [1992] about simulating random access memory using only pointers). Pippenger also establishes that there are algorithms for which that is the best you can do; there are problems which are O(n) in the impure system which are Ω(n log n) in the pure system.
There are a few caveats to be made about this paper. The most significant is that it does not address lazy functional languages, such as Haskell. Bird, Jones and De Moor [1997] demonstrate that the problem constructed by Pippenger can be solved in a lazy functional language in O(n) time, but they do not establish (and as far as I know, no one has) whether or not a lazy functional language can solve all problems in the same asymptotic running time as a language with mutation.
The problem that Pippenger constructs to require Ω(n log n) is specifically designed to achieve this result, and is not necessarily representative of practical, real-world problems. There are a few restrictions on the problem that are a bit unexpected but necessary for the proof to work; in particular, the problem requires that results are computed on-line, without being able to access future input, and that the input consists of a sequence of atoms from an unbounded set of possible atoms, rather than a fixed-size set. And the paper only establishes (lower-bound) results for an impure algorithm of linear running time; for problems that require a greater running time, it is possible that the extra O(log n) factor seen in the linear problem can be "absorbed" into the extra operations an algorithm with greater running time must perform anyway. These clarifications and open questions are explored briefly by Ben-Amram [1996].
In practice, many algorithms can be implemented in a pure functional language at the same efficiency as in a language with mutable data structures. For a good reference on techniques to use for implementing purely functional data structures efficiently, see Chris Okasaki's "Purely Functional Data Structures" [Okasaki 1998] (which is an expanded version of his thesis [Okasaki 1996]).
Anyone who needs to implement algorithms on purely-functional data structures should read Okasaki. You can always get at worst an O(log n) slowdown per operation by simulating mutable memory with a balanced binary tree, but in many cases you can do considerably better than that, and Okasaki describes many useful techniques, from amortized techniques to real-time ones that do the amortized work incrementally. Purely functional data structures can be a bit difficult to work with and analyze, but they provide many benefits like referential transparency that are helpful in compiler optimization, in parallel and distributed computing, and in implementation of features like versioning, undo, and rollback.
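As a minimal illustration of that worst-case fallback, here is a Haskell sketch using Data.Map (a balanced binary tree under the hood; the Mem type and function names are invented): "mutable" memory becomes a persistent map with O(log n) reads and writes, and old versions remain usable.

import qualified Data.Map as M

-- Simulate mutable memory with a persistent balanced tree.
type Mem a = M.Map Int a

writeAt :: Int -> a -> Mem a -> Mem a
writeAt = M.insert

readAt :: a -> Int -> Mem a -> a
readAt def addr mem = M.findWithDefault def addr mem

main :: IO ()
main =
  let v1 = writeAt 0 42 (M.empty :: Mem Int)
      v2 = writeAt 0 7 v1                    -- an "overwrite" yields a new version
  in print (readAt 0 0 v1, readAt 0 0 v2)    -- prints (42,7): v1 is still valid

That surviving old version (v1) is exactly the persistence that makes features like versioning, undo, and rollback cheap.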
Note also that all of this discusses only asymptotic running times. Many techniques for implementing purely functional data structures give you a certain amount of constant factor slowdown, due to extra bookkeeping necessary for them to work, and implementation details of the language in question. The benefits of purely functional data structures may outweigh these constant factor slowdowns, so you will generally need to make trade-offs based on the problem in question.
References

Ben-Amram, Amir and Galil, Zvi. 1992. "On Pointers versus Addresses." Journal of the ACM, 39(3), pp. 617-648, July 1992.

Ben-Amram, Amir. 1996. "Notes on Pippenger's Comparison of Pure and Impure Lisp." Unpublished manuscript, DIKU, University of Copenhagen.

Bird, Richard, Jones, Geraint, and De Moor, Oege. 1997. "More Haste, Less Speed: Lazy versus Eager Evaluation." Journal of Functional Programming, 7(5), pp. 541-547, September 1997.

Okasaki, Chris. 1996. "Purely Functional Data Structures." PhD thesis, Carnegie Mellon University.

Okasaki, Chris. 1998. Purely Functional Data Structures. Cambridge University Press, Cambridge, UK.

Pippenger, Nicholas. 1996. "Pure versus Impure Lisp." ACM Symposium on Principles of Programming Languages (POPL), pp. 104-109, January 1996.