You seem to be drawing a line between declaring things and instructing a machine. There is no such hard-and-fast separation. Because the machine that's being instructed in imperative programming need not be physical hardware, there is much freedom for interpretation. Almost everything can be seen as an explicit program for the right abstract machine. For example, you could see CSS as a rather high level language for programming a machine that primarily resolves selectors and sets attributes of the thus-selected DOM objects.
The question is whether such a perspective is sensible, and conversely, how closely the sequence of instructions resembles a declaration of the result being computed. For CSS, the declarative perspective is clearly more useful. For C, the imperative perspective clearly predominates. As for languages like Haskell, well…
The language has a specified, concrete semantics, so of course one can interpret a program as a chain of operations. It doesn't even take too much effort to choose the primitive operations so that they map nicely onto commodity hardware (this is what the STG machine and other evaluation models do).
However, the way Haskell programs are written, they frequently can sensibly be read as a description of the result to be computed. Take, for example, the program to compute the sum of the first N factorials:
sum_of_fac n = sum [product [1..i] | i <- [0..n]]
You can desugar this and read it as a sequence of STG operations, but it is far more natural to read it as a description of the result (which is, I think, a more useful definition of declarative programming than “what to compute”): the result is the sum of the products of [1..i] for all i = 0, …, n. And that is far more declarative than almost any C program or function.
How can varargs be implemented? We need some mechanism to signal the end of the argument list. This can either be
- a special terminator value, or
- the length of the vararg list passed as an extra parameter.
Both of these mechanisms can be used in the context of currying to implement varargs, but proper typing becomes a major issue. Let's assume that we are dealing with a function sum: ...int -> int, except that this function uses currying (so we actually have a type more like sum: int -> ... -> int -> int, except that we don't know the number of arguments).
Case: terminator value: Let end be the special terminator, and T be the type of sum. We now know that applied to end the function returns an int: sum: end -> int, and that applied to an int we get another sum-like function: sum: int -> T. Therefore T is the union of these types: T = (end -> int) | (int -> T). By substituting T, we get various possible types such as end -> int, int -> end -> int, int -> int -> end -> int, etc. However, most type systems do not accommodate such types.
Case: explicit length: The first argument to a vararg function is the number of varargs. So sum 0 : int, sum 1 : int -> int, sum 3 : int -> int -> int -> int, etc. This is supported in some type systems and is an example of dependent typing. Actually, the number of arguments would be a type parameter and not a regular parameter, since it would not make sense for the arity of the function to depend on a runtime value: s = ((sum (floor (rand 3))) 1) 2 is obviously ill-typed, since it evaluates to either s = ((sum 0) 1) 2 = (0 1) 2, s = ((sum 1) 1) 2 = 1 2, or s = ((sum 2) 1) 2 = 3.
In practice, none of these techniques should be used, since they are error-prone and don't have a (meaningful) type in common type systems. Instead, just pass a list of values as one parameter: sum: [int] -> int.
Yes, it is possible for an object to appear as both a function and a value, e.g. in a type system with coercions. Let sum be a SumObj, which has two coercions:
- coerce: SumObj -> int -> SumObj allows sum to be used as a function, and
- coerce: SumObj -> int allows us to extract the result.
Technically, this is a variation of the terminator-value case above, with T = SumObj and coerce being an un-wrapper for the type. In many object-oriented languages, this is trivially implementable with operator overloading, e.g. in C++:
#include <iostream>
using namespace std;

class sum {
    int value;
public:
    explicit sum() : sum(0) {}
    explicit sum(int x) : value(x) {}
    sum operator()(int x) const { return sum(value + x); } // function call overload
    operator int() const { return value; }                 // integer cast overload
};

int main() {
    int zero = sum();
    cout << "zero sum as int: " << zero << '\n';
    int someSum = sum(1)(2)(4);
    cout << "some sum as int: " << someSum << '\n';
}
It's important to note that Moore's Law does not talk about processor speeds. It talks about transistor density.
In fact, we hit the wall on clock rates quite a long time ago, and clock speeds have barely risen since the Pentium 4 era. There have been additional performance gains from out-of-order execution and other mechanisms that exploit instruction-level parallelism, but these days the extra density goes towards more cache and more cores, rather than straightforwardly boosting the speed of each core.
Memory, on the other hand, is much more straightforward to implement - a higher transistor density basically means you can pack more memory cells into the same package, without having to do any complex redesign.
Basically, memory scales exceptionally well with increasing transistor density. Processors, not so much.