When Java was first designed it was considered appropriate to leave out anonymous functions. I can think of two reasons (but they might be different from the official ones):
- Java was designed as an object-oriented language in which functions are not first-class values, so anonymous functions would not have fit in naturally. Or at least, supporting them would have influenced the design of the language a lot.
- Anonymous functions were not popular in the programmer communities that Java was meant to attract (C, C++, Pascal?). Even now, many Java programmers seem to consider these features quite exotic (but this will probably change very quickly with Java 8).
In the following years, as Robert Harvey has explained, the policy of Sun was always to keep Java backward compatible and very stable.
On the other hand, competing languages have emerged (the most important being C#, which was born as a Java clone and then took its own development direction).
Competing languages have put Java under pressure for two reasons:
Expressive power
New features can make certain programming idioms easier to write, making the language more attractive to programmers. Normally the set of features provided by a language is a compromise between expressive power, language complexity, and design coherence: adding more features makes a language more expressive but also more complex and harder to master.
In any case, over the last few years Java's competitors have added lots of new features that Java did not have, and this can be considered an advantage for them.
Hype
Yes, unfortunately this is a factor in technology choice, at least from what I can see in my daily experience as a programmer: a tool must have a certain feature, even if most members of the team don't know how to use it and those who would be able to use it don't need it most of the time.
Hype can be even more important for non-technical people like managers, who may be the ones who decide the platform for a certain project. Managers sometimes only remember a few keywords like lambda, parallelism, multicore, functional programming, cloud computing, ... If our technology of choice has a green mark on each item of the list, then we are up to date.
So IMO for some time Java has been caught between
- on one hand, the original policy of language stability and design simplicity, together with a huge existing code base and developer community, and
- on the other hand, the pressure of competing languages that could lure Java programmers away: C# at first, and then Scala, Clojure, F# (I name the ones I am aware of; there may be others).
Eventually Oracle decided to upgrade Java to make it more competitive.
In my opinion, the new features are aimed especially at Java programmers who might be tempted to switch to C# but who see other languages like Scala and Clojure as too different from Java.
On the other hand, developers who have some experience with functional programming and still want to use the JVM have probably already switched to Scala, Clojure, or another language.
So the new Java 8 features will make Java more powerful as a language, and the declared focus is concurrent and parallel programming, but the upgrade also seems to address the marketing side (Mark Reinhold, chief architect for Java at Oracle, said: "Some would say adding Lambda expressions is just to keep up with the cool kids, and there's some truth in that, but the real reason is multicore processors; the best way to handle them is with Lambda", see this article).
So, yes, many (if not all) of the Java 8 features were already well known, but why and when a feature is added to a language depends on many factors: target audience, existing community, existing code base, competitors, marketing, etc.
EDIT
A short note regarding "... I had read about streams in SIC (1996).": do you mean that you need Java 8 lambdas to implement streams? Actually you can implement them using anonymous inner classes.
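For example, here is a minimal sketch of a lazy integer "stream" built with a pre-Java-8 anonymous inner class (the IntSource interface and the naturals generator are names I made up just for this illustration):

// Single-method interface playing the role of a lazy source of values.
interface IntSource {
    int next();
}

public class StreamSketch {
    public static void main(String[] args) {
        // The anonymous inner class is the verbose, pre-lambda way to
        // package a bit of behavior together with its state.
        IntSource naturals = new IntSource() {
            private int current = 0;

            @Override
            public int next() {
                return current++;
            }
        };

        // Values are produced lazily, one at a time, only when requested.
        for (int i = 0; i < 5; i++) {
            System.out.println(naturals.next()); // prints 0, 1, 2, 3, 4
        }
    }
}

The pattern works, but all the boilerplate around the single next() method is exactly what a Java 8 lambda removes.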
Quote
Quote returns data which you should not modify, and which may share structure with other quoted data.
E.g. if you have a file which contains
(define l1 '(1 2 3))
(define l2 '(4 2 3))
then the compiler is permitted to allocate l1 and l2 in such a way that they share their common tail, i.e., (cdr l1) and (cdr l2) are the same object, and/or to place them in read-only memory.
Modification of such lists is undefined behavior.
Do not do it.
list
list and cons create fresh objects (different from everything which already exists); they allocate and populate memory. You own them - you can modify them as much as you want.
Your case
Both your set-car! calls are wrong - you are modifying read-only data and thus triggering undefined behavior (i.e., you are lucky your computer did not blow up in your face :-).
Specifically, in the first case, ls1, you get what you would get if you had done the right thing, i.e.,
(define ls1
  (cons (cons 1 2)
        (cons 1 2)))
while in the second case the implementation allocated only one cons cell (1 . 2) and re-used it in creating ls2, i.e., you see what you would see if you evaluated the following (legal) code:
(define ls2
  (let ((l (cons 1 2)))
    (cons l l)))
If there were print-circle in Scheme, you could see the data re-use (the session below is Common Lisp, which does have it):
[1]> (let ((l (cons 1 2)))
       (cons l l))
((1 . 2) 1 . 2)
[2]> (setq *print-circle* t)
T
[3]> (let ((l (cons 1 2)))
       (cons l l))
(#1=(1 . 2) . #1#)
Binding
"x is bound to a value" means that the name x refers to the object, in the same way as in all languages.
The difference in Lisp/Scheme is what the object is.
Here it is the first cons cell of the list - as you have probably seen many times, a (linked) list is a chain of cons cells, where car contains the value and cdr contains the next cons cell in the list.
Best Answer
When type theorists say "typed", they mean what most programmers call statically typed. This is due to a fundamental divide: type theorists care about proofs and related beasts, and hence care about statements that apply to all possible executions of a program. The mere notion of a "runtime type tag" doesn't make sense to them. If a type theorist says "this has type int", they mean "I can formally prove that this only ever takes on int values".

In contrast, an untyped language is one where you can't construct such a proof, because the language doesn't give you enough guarantees/information. This is the original meaning of "untyped", and it is actively used by (a minority of) people talking about type systems online. An alternative term is "unityped", because if you have to assign a type, the only one available is the trivial type "any value whatsoever".
The simply typed lambda calculus is typed in this sense: it has a static type system, as you would say. In the same sense, both Scheme and the untyped lambda calculus are untyped.
Programmers, on the other hand, primarily want to know what kind of value is in some memory location; whether this knowledge is innate in the source code for a compiler to explore and make use of, or whether it is determined at run time, is a separate decision.
In accordance with their understanding of "type", programmers have a different definition of "untyped": A system that has neither static information nor runtime tags, because there is effectively only "one type" to choose from (e.g. in Tcl, everything is a string). In this sense, the untyped lambda calculus is still untyped (everything is a function), but Scheme is, as you note, typed (though dynamically).
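To make the programmer's distinction concrete, here is a small Java sketch of my own (Java is only used as an illustration here): the compiler relies on the static type information present in the source, while instanceof inspects the runtime tag of a value whose static type is just Object.

public class TypeInfoDemo {
    public static void main(String[] args) {
        // Static information: the compiler knows s holds a String,
        // so s.length() is checked at compile time.
        String s = "hello";
        System.out.println(s.length());

        // Here the static type is only Object; what kind of value is
        // actually stored is determined at run time via its runtime tag.
        Object o = args.length > 0 ? (Object) args[0] : (Object) Integer.valueOf(42);
        if (o instanceof String) {
            System.out.println("a String of length " + ((String) o).length());
        } else {
            System.out.println("not a String: " + o);
        }
    }
}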