Java – Do young minds need to learn pointer concepts?

c java pointers programming-languages

Why did the C master Dennis Ritchie introduce pointers in C? And why did later programming languages like VB.NET, Java, and C# eliminate them? I have found some points on Google, and I want to hear your comments as well. Why do modern languages leave pointer concepts out?

People say C is the foundational language, and that pointers are the concept that makes C powerful and outstanding and lets it still compete with more modern languages. So why were pointers eliminated from those more modern languages?

Do you think knowledge of pointers is still important for new programmers? People are using VB.NET or Java these days, which support more advanced features than C (and do not expose any pointer concepts), and many people I know (my friends) choose these languages and ignore C because of those features. I tell them to start with C. They say it's a waste to learn pointer concepts when you're doing advanced things in VB.NET or Java that are not possible in C.

What do you think?

Updated:

The comments I read on Google are:

  1. Earlier computers were slow and not well optimized, so efficiency mattered more.

  2. Using pointers makes it possible to access data at an address directly, which saves time in function calls compared with making a copy of the data (see the C sketch below this list).

  3. Security is significantly worse using pointers, and that's why Java and C# did not include them.

These are some of the points I found. I would still appreciate some more substantial answers.
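To make point 2 concrete, here is a minimal C sketch (the struct and function names are made up for illustration): passing a struct by value copies the whole thing for the call, while passing a pointer only copies an address.

```c
#include <stdio.h>

/* A deliberately large struct, made up for illustration. */
struct Record {
    char data[4096];
};

/* Pass by value: the whole 4 KB struct is copied for the call. */
char first_byte_by_value(struct Record r) {
    return r.data[0];
}

/* Pass by pointer: only an address is passed; the caller's data is read in place. */
char first_byte_by_pointer(const struct Record *r) {
    return r->data[0];
}

int main(void) {
    struct Record rec = { { 'x' } };
    printf("%c %c\n", first_byte_by_value(rec), first_byte_by_pointer(&rec));
    return 0;
}
```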

Best Answer

Back in those days, developers were working much closer to the metal. C was essentially a higher level replacement for assembly, which is almost as close to the hardware as you can get, so it was natural that you needed pointers to solve coding problems efficiently. However, pointers are sharp tools, which can cause great damage if used carelessly. Also, direct use of pointers opens up the possibility of many security problems, which weren't an issue back then (in 1970, the internet consisted of a few dozen machines across a couple of universities, and it wasn't even called that yet...), but became more and more important since. So nowadays higher level languages are consciously designed to avoid raw memory pointers.

Saying that "advanced things done in VB.Net or Java are not possible in C" shows a very limited point of view, to say the least :-)

First of all, all of these languages (even assembly) are Turing complete, so in theory whatever is possible in one language is possible in all of them. Just think about what happens when a piece of VB.Net or Java code is compiled and executed: eventually, it is translated into (or mapped to) machine code, because that is the only thing the machine understands. In compiled languages like C and C++, you can actually get the full body of machine code equivalent to the original higher level source code, as one or more executable files/libraries. In VM-based languages, it is trickier (and may not even be possible) to get the entire equivalent machine code representation of your program, but eventually it is still there somewhere, within the deep recesses of the runtime system and the JIT.

Now, of course, it is an entirely different question whether some solution is feasible in a specific language. No sensible developer would start writing a web app in assembly :-) But it is useful to bear in mind that most or all of those higher level languages are built on top of a huge amount of runtime and class library code, a large chunk of which is implemented in a lower level language, typically in C.

So to get to the question,

Do you think knowledge of pointers [...] is still important?

The concept behind pointers is indirection. This is a very important concept and IMHO every good programmer should grasp it at some level. Even if someone works solely with higher level languages, indirection and references are still important. Failing to understand this means being unable to use a whole class of very potent tools, seriously limiting one's problem solving ability in the long run.
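To make the idea of indirection concrete, here is a minimal C sketch (the names are my own): the function receives an address rather than a copy, so writing through that address changes the caller's variable.

```c
#include <stdio.h>

/* 'counter' is an address; writing through it changes whatever it points at. */
void increment(int *counter) {
    *counter += 1;
}

int main(void) {
    int hits = 0;
    int *p = &hits;     /* p refers to hits indirectly */

    increment(&hits);   /* pass the address, not a copy */
    increment(p);       /* same effect, through the second name */

    printf("hits = %d\n", hits);  /* prints 2 */
    return 0;
}
```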

So my answer is yes, if you want to become a truly good programmer, you must understand pointers too (as well as recursion - this is the other typical stumbling block for budding developers). You may not need to start with it - I don't think C is optimal as a first language nowadays. But at some point one should get familiar with indirection. Without it, we can never understand how the tools, libraries and frameworks we are using actually work. And a craftsman who doesn't understand how his/her tools work is a very limited one. Fair enough, one may get a grasp of it in higher level programming languages too. One good litmus test is correctly implementing a doubly linked list - if you can do it in your favourite language, you can claim you understand indirection well enough.
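For what it's worth, a minimal sketch of that litmus test in C might look like this (the function and field names are just illustrative); the whole exercise is about keeping the next and prev links consistent when inserting and removing nodes.

```c
#include <stdio.h>
#include <stdlib.h>

struct Node {
    int value;
    struct Node *prev;
    struct Node *next;
};

/* Insert a new node at the front of the list; returns the new head. */
struct Node *push_front(struct Node *head, int value) {
    struct Node *node = malloc(sizeof *node);
    node->value = value;
    node->prev = NULL;
    node->next = head;
    if (head != NULL)
        head->prev = node;
    return node;
}

/* Unlink and free a node; returns the (possibly new) head. */
struct Node *remove_node(struct Node *head, struct Node *node) {
    if (node->prev != NULL)
        node->prev->next = node->next;
    else
        head = node->next;            /* we removed the head */
    if (node->next != NULL)
        node->next->prev = node->prev;
    free(node);
    return head;
}

int main(void) {
    struct Node *head = NULL;
    for (int i = 3; i >= 1; --i)
        head = push_front(head, i);       /* list is now 1 2 3 */

    head = remove_node(head, head->next); /* drop the middle node */

    for (struct Node *n = head; n != NULL; n = n->next)
        printf("%d ", n->value);          /* prints: 1 3 */
    printf("\n");

    while (head != NULL)                  /* clean up what is left */
        head = remove_node(head, head);
    return 0;
}
```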

If for nothing else, we should do it to learn respect for the programmers of old who managed to build unbelievable things with the ridiculously simple tools they had (compared to what we have now). We are all standing on the shoulders of giants, and it does us good to acknowledge this, rather than pretending we are the giants ourselves.