You can roughly emulate map/filter by using foldRight (or foldLeft) with cons as the given reduction function. For example, foldRight(f, L, initial) where L = [x,y,z] might be expanded to f(x, f(y, f(z, initial))). This means that you can't finish processing x until you've computed f(z, initial) and then f(y, ...). You're creating a dependency that doesn't exist with map/filter.
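As a rough sketch in JavaScript: Array.prototype.reduceRight plays the role of foldRight here, and prepending the new element to the accumulator plays the role of cons (the helper names are mine, not from the original):

```javascript
// Emulating map and filter with a right fold. reduceRight stands in for
// foldRight; [f(x), ...acc] stands in for cons(f(x), acc).
const mapViaFold = (f, xs) =>
  xs.reduceRight((acc, x) => [f(x), ...acc], []);

const filterViaFold = (p, xs) =>
  xs.reduceRight((acc, x) => (p(x) ? [x, ...acc] : acc), []);

console.log(mapViaFold(x => x * 2, [1, 2, 3]));             // [2, 4, 6]
console.log(filterViaFold(x => x % 2 === 0, [1, 2, 3, 4])); // [2, 4]
```

Note the dependency described above: the innermost call (on the last element) must finish before the outer ones can, which a plain map does not require.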
As for map ordering...
(A) collection.map(a => a.map(b => ...))
(A) takes a collection and applies the map function to each element, which implies that each element is itself a "mappable" collection. The inner map returns the processed element (which is a list), so each element of collection remains a collection. This is how you would map a function onto each element of a list of lists; the result would again be a list of lists.
(B) collection.map(a => ...).map(b => ...)
(B) processes each element of a list and forms those results into a new list. This list is then processed again with a second map function giving yet another list.
(A) is for processing the inner elements of a list ("sub-elements" if you like). (B) is for processing a list multiple times. If we write (B) concretely as
collection.map(a => f(a)).map(b => g(b))
we can see it is equivalent to
collection.map(a => g(f(a)))
It might help you to write them out as for loops: (A) will use nested loops, whereas (B) will use two sequential loops.
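A sketch of that comparison in JavaScript, with hypothetical element functions f and g passed in as parameters:

```javascript
// (A) nested loops: transform each sub-element of a list of lists.
function mapNested(collection, f) {
  const result = [];
  for (const inner of collection) {   // outer loop over sub-lists
    const mapped = [];
    for (const b of inner) {          // inner loop over sub-elements
      mapped.push(f(b));
    }
    result.push(mapped);
  }
  return result;
}

// (B) two sequential loops: process the same flat list twice.
function mapTwice(collection, f, g) {
  const first = [];
  for (const a of collection) first.push(f(a));
  const second = [];
  for (const b of first) second.push(g(b));
  return second;   // same result as collection.map(a => g(f(a)))
}

console.log(mapNested([[1, 2, 3], [4, 5, 6]], b => b * 2)); // [[2, 4, 6], [8, 10, 12]]
console.log(mapTwice([1, 2, 3], a => a * 2, b => b + 1));   // [3, 5, 7]
```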
This is not the difference between fold and unfold. Neither (A) nor (B) is a fold, as the list structure, however deeply nested, is preserved. Fold collapses a list into a single value, while unfold (I'm not too familiar with it) takes a seed value and produces a list according to some rule.
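As a rough illustration in JavaScript: reduce plays the role of fold, and since JavaScript has no built-in unfold, the helper below is a hypothetical sketch of one:

```javascript
// fold: collapse a list into a single value.
const sum = [1, 2, 3, 4].reduce((acc, x) => acc + x, 0);
console.log(sum); // 10

// unfold: grow a list from a seed. This helper is a sketch, not a standard
// function: step returns either null (stop) or [value, nextSeed].
function unfold(step, seed) {
  const out = [];
  let next = step(seed);
  while (next !== null) {
    out.push(next[0]);
    next = step(next[1]);
  }
  return out;
}

// Count down from a seed: a single value in, a list out.
const countdown = unfold(n => (n > 0 ? [n, n - 1] : null), 4);
console.log(countdown); // [4, 3, 2, 1]
```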
EDIT: @Giorgio was right to suggest flatMap. It's an interesting variation on map. Let's say we're mapping a list of Xs to a list of Ys, so we pass map a function f:X->Y. Now suppose we have another calculation g that takes an X but returns multiple Ys, g:X->[Y]. In this case we would use flatMap. map takes the results of f and puts them into a list; flatMap takes the results of g and concatenates them together.
Ex. Say we have a list of lists L = [[1,2,3],[4,5,6]]. Then L.map(a => a.map(b => b * 2)) gives [[2,4,6],[8,10,12]]. But say we want each number doubled in one flat list, with no sub-lists. Then we would call L.flatMap(a => a.map(b => b * 2)), which gives [2,4,6,8,10,12]. Note that our inner function a => a.map(b => b * 2) takes and returns a list.
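This example runs as-is in modern JavaScript, where Array.prototype.flatMap has been built in since ES2019:

```javascript
const L = [[1, 2, 3], [4, 5, 6]];

// map preserves the nesting: each sub-list maps to another sub-list.
const nested = L.map(a => a.map(b => b * 2));
console.log(nested); // [[2, 4, 6], [8, 10, 12]]

// flatMap maps each sub-list and concatenates the results into one flat list.
const flat = L.flatMap(a => a.map(b => b * 2));
console.log(flat);   // [2, 4, 6, 8, 10, 12]
```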
Normal order evaluation and call-by-name evaluation are not quite the same thing. In normal order evaluation, the outermost function is evaluated before any of its arguments, and those arguments are evaluated only if needed. In call-by-name evaluation, the arguments are effectively copied into the body of the outermost function, and then that function is evaluated. In both cases the outermost function is evaluated before the arguments, but in pure call-by-name an argument is re-evaluated each time it is used (zero, one, or many times), whereas normal order only requires that an argument not be evaluated before it is first needed (so it is typically evaluated zero or one times).
Normal order evaluation therefore leaves open the possibility of memoizing the arguments as an optimization (sometimes called call-by-need), while call-by-name does not. One could say that call-by-name evaluation is a special case of normal order evaluation: normal order evaluation refers to the general approach of evaluating a function before its arguments, while call-by-name refers to a specific technique for implementing normal order evaluation.
As an example, given f(x, y) = sqrt(x*x + y*y), there are two ways of implementing f(a+b, c+d) with normal order evaluation:
Memoized:
t1 = a+b;
t2 = c+d;
return sqrt(t1*t1 + t2*t2);
Call-by-name:
return sqrt((a+b)*(a+b) + (c+d)*(c+d));
As you can see, if the call to f included other function calls (e.g. f(random(1,100), ask_user_for_value())), the two will behave very differently. The memoized version will square a single random number and ask the user for a value only once, while the call-by-name version will multiply two different random numbers and ask the user for a value twice.
To learn more about these concepts, I recommend reading the evaluation strategy Wikipedia page, and https://cs.stackexchange.com/questions/7702/applicative-order-and-normal-order-in-lambda-calculus.
With dynamic typing, it is fairly straightforward to implement an object-oriented system in terms of closures. I've written about this in my answer to “Modelling Objects as Functions” and expanded on that on my blog. There are some details about open recursion that are necessary to get certain OOP features such as subclassing right, but it's doable. In short, invoking the closure representing an object maps a method name to a method bound to that object, which can then be invoked as an ordinary function. Usage might look like
result = object("methodName")(otherObject).

However, OOP and static typing are fundamentally at odds. There are various interpretations of OOP, such as OOP-as-message-passing or OOP-as-virtual-dispatch, but a central theme is that we do not statically know the exact runtime type of an expression such as x. However, many statically typed OOP languages do assign type bounds to an expression (such as: x is a subtype of Iterable). When implementing objects in terms of closures in a statically typed language, we quickly run into a problem: when specifying the required method as an argument to the dispatch function, we can't statically know the required signature of the function. In the above example, how do I know that the result of object("methodName") should be a function accepting exactly one parameter?

The common solution when implementing objects is to make the object system unityped (a.k.a. untyped). That is, all method signatures would have the same type regardless of their arity (number of arguments) or of the types of their arguments. This also requires that all our objects have the same type in the host language.
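In a dynamically typed host language this scheme is easy to sketch. The following JavaScript is a minimal illustration of the closure-as-object idea; makeCounter and its method names are hypothetical examples, not from the original answer:

```javascript
// A closure-based object: invoking the closure maps a method name to a
// function already bound to the object's private state.
function makeCounter(start) {
  let count = start;               // private state captured by the closure
  const self = methodName => {
    const methods = {
      increment: n => { count += n; return self; },
      value: () => count,
    };
    return methods[methodName];    // dispatch: name -> bound method
  };
  return self;
}

const counter = makeCounter(10);
counter("increment")(5);           // usage mirrors object("methodName")(args)
console.log(counter("value")());   // 15
```

Note that nothing constrains the shape of the returned method: every dispatch has the same "string in, function out" type, which is exactly the unityped property described above.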
Some languages might be able to do better. E.g. in C++, we could pass the requested method type as a template parameter to the dispatch function:
auto result = object<ResultType(ArgumentType)>("methodName")(otherObject). A clever implementation could then use different lookup tables to dispatch int() or int(int) method types, etc. However, we cannot statically guarantee that the requested method exists, since it is provided as a string. Another possibility would be to pass a (visitor-like?) Message object rather than a string name to the dispatch function, which might allow more type safety in statically typed host languages, but I do not have sufficient experience with implementing OOP in static languages to describe this in more detail.