When & why did pointers start to become viewed as risky

history, pointers

There seems to have been a gradual shift in thinking about the use of pointers in programming languages, to the point where it is now generally accepted that pointers are risky (if not outright "evil" or similar hyperbole).

What were the historical developments behind this shift in thinking? Were there specific, seminal events, research, or other developments?

For instance, a superficial look back at the transition from C to C++ to Java seems to show a trend of supplementing and then entirely replacing pointers with references. However, the real chain of events was probably much more subtle and complex than this, and not nearly so sequential. The features that made it into those mainstream languages may have originated elsewhere, perhaps long before.

Note: I am not asking about the actual merits of pointers vs. references vs. something else. My focus is on the rationales for this apparent shift.

Best Answer

The rationale was the development of alternatives to pointers.

Under the hood, any pointer/reference/etc. is implemented as an integer holding a memory address. When C came out, this functionality was exposed directly as pointers. That meant anything the underlying hardware could do with memory addressing could be done through pointers.
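To make the "integer holding a memory address" point concrete, here is a minimal C++ sketch (the variable names are mine, purely for illustration):

    #include <cstdio>
    #include <cstdint>

    int main() {
        int x = 42;
        int *p = &x;   // at the machine level, p is just the address of x

        // The address really is an integer, and you can look at it as one:
        std::uintptr_t raw = reinterpret_cast<std::uintptr_t>(p);
        std::printf("x lives at %p, i.e. the integer %ju\n",
                    static_cast<void *>(p), static_cast<std::uintmax_t>(raw));

        // Because it is just an integer, nothing stops you from computing a new
        // "address" and treating it as a pointer; dereferencing it would be
        // undefined behaviour, but the language lets you form it.
        int *somewhere = reinterpret_cast<int *>(raw + 1000);
        (void)somewhere;
        return 0;
    }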

This was always "dangerous," but danger is relative. When you're writing a 1000-line program, or when you have IBM-grade software quality procedures in place, the danger could be managed easily. However, not all software was being developed that way, so a desire for simpler, safer constructs emerged.

If you think about it, an int& and an int* const really have the same level of safety, but one has much nicer syntax than the other. An int& could also be more efficient because it could refer to an int stored in a register (an anachronism now: modern compilers optimize well enough that a pointed-to int can live in a register too, as long as you never use any feature that requires an actual address, such as incrementing the pointer with ++).
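A short C++ sketch of that comparison (the function names are hypothetical, chosen only to contrast the two call sites):

    #include <cassert>

    // Both forms name exactly one int and cannot be reseated to another;
    // the reference just drops the dereference/address-of noise.
    void bump_ptr(int *const p) { *p += 1; }   // caller writes bump_ptr(&counter)
    void bump_ref(int &r)       { r += 1; }    // caller writes bump_ref(counter)

    int main() {
        int counter = 0;
        bump_ptr(&counter);
        bump_ref(counter);
        assert(counter == 2);
        return 0;
    }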

As we move to Java, we move into languages that provide some safety guarantees. C and C++ provided none; Java guarantees that only legal operations get executed. To do this, Java did away with pointers entirely. What they found is that the vast majority of pointer/reference operations done in real code were things references were more than sufficient for. Only in a handful of cases, such as fast iteration through an array, were pointers truly needed. In those cases, Java takes a runtime hit to avoid them.
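To illustrate the "fast iteration through an array" case, here is a C++ sketch contrasting the pointer-walking idiom Java gave up with the indexed form its loops take instead (the function names are mine; the bounds-check comment describes typical JVM behaviour, which the JIT can sometimes optimize away):

    #include <cassert>
    #include <cstddef>

    // Pointer-walking scan: the idiom Java chose to drop entirely.
    long sum_by_pointer(const int *data, std::size_t n) {
        long total = 0;
        for (const int *p = data; p != data + n; ++p)   // no per-access bounds check
            total += *p;
        return total;
    }

    // Indexed scan: the shape the same loop takes with references plus an index;
    // in Java every element access also carries a runtime bounds check unless
    // the JIT proves it redundant.
    long sum_by_index(const int *data, std::size_t n) {
        long total = 0;
        for (std::size_t i = 0; i < n; ++i)
            total += data[i];
        return total;
    }

    int main() {
        const int values[] = {1, 2, 3, 4, 5};
        assert(sum_by_pointer(values, 5) == sum_by_index(values, 5));
        return 0;
    }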

The move has not been monotonic. C# reintroduced pointers, though in a very limited form: they are marked "unsafe," meaning they cannot be used by untrusted code, and there are explicit rules about what they can and cannot point to (for example, it's simply invalid to increment a pointer past the end of an array). The designers found there was a handful of cases where the raw performance of pointers was needed, so they put them back in.

Also of interest would be the functional languages, which have no such concept at all, but that's a very different discussion.