In the "Reading Code from Top to Bottom" subsection of the part that discusses abstraction, the author explains (hierarchical indentation mine):
[...] we want to be able to read the program as though it were a set of TO paragraphs, each of which is describing the current level of abstraction and referencing subsequent TO paragraphs at the next level down.
- To include the setups and teardowns, we include setups, then we include the test page content, and then we include the teardowns.
- To include the setups, we include the suite setup if this is a suite, then we include the regular setup.
- To include the suite setup, we search the parent hierarchy for the "SuiteSetUp" page and add an include statement with the path of that page.
The code that'd go along with this would look something like the following:
public void CreateTestPage()
{
    IncludeSetups();
    IncludeTestPageContent();
    IncludeTeardowns();
}

public void IncludeSetups()
{
    if (this.IsSuite())
    {
        IncludeSuiteSetup();
    }
    IncludeRegularSetup();
}

public void IncludeSuiteSetup()
{
    var parentPage = FindParentSuitePage();
    // add an include statement with the path of parentPage
}
And so on. Every time you go deeper down the function hierarchy, you should be changing levels of abstraction. In the example above, IncludeSetups, IncludeTestPageContent, and IncludeTeardowns are all at the same level of abstraction.
In the example given in the book, the author suggests breaking the big function up into smaller ones that are very specific and do one thing only. Done right, the refactored function looks much like the example here. (The refactored version is given in Listing 3-7 in the book.)
The key concept for thinking about these things is abstraction.
Abstraction just means deliberately ignoring the details of a system so that you can think about it as a single, indivisible component when assembling a larger system out of many subsystems. It is unimaginably powerful: writing a modern application program while considering the details of memory allocation, register spilling, and transistor switching times would be possible in some idealized way, but it is incomparably easier not to think about them and just use high-level operations instead. The modern computing paradigm relies crucially on multiple levels of abstraction: solid-state electronics, microprogramming, machine instructions, high-level programming languages, OS and Web programming APIs, user-programmable frameworks and applications. Virtually no one could comprehend the entire system nowadays, and there isn't even a conceivable path by which we could ever go back to that state of affairs.
The flip side of abstraction is loss of power. By leaving decisions about details to lower levels, we often accept that they may be made with suboptimal efficiency, since the lower levels do not have the 'Big Picture': they can optimize their workings only with local knowledge, and they are not as (potentially) intelligent as a human being. (Usually. For instance, compiling a high-level language to machine code is nowadays often done better by machines than by even the most knowledgeable human, since processor architecture has become so complicated.)
The issue of security is an interesting one, because flaws and 'leaks' in an abstraction can often be exploited to violate the integrity of a system. When an API postulates that you may call methods A, B, and C, but only if condition X holds, it is easy to forget the condition and be unprepared for the fallout when it is violated. For instance, the classical buffer overflow exploits the fact that writing to memory yields undefined behaviour unless you have allocated that particular block of memory yourself. The API only guarantees that something will happen as a result, but in practice the result is defined by the details of the system at the next lower level - which we have deliberately forgotten about! As long as we fulfill the condition, this is of no importance, but if not, an attacker who understands both levels intimately can usually direct the behaviour of the entire system as desired and cause bad things to happen.
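As a concrete illustration of that leak, here is a minimal C++ sketch of the classical overflow (the 12-byte allocation and the 19-character string are arbitrary, chosen only so that the write exceeds the allocation):

#include <cstdlib>
#include <cstring>

int main()
{
    // The malloc contract: writes are defined only within the 12 bytes we asked for.
    char *buf = static_cast<char *>(std::malloc(12));
    if (buf == nullptr)
        return 1;

    // Violates the precondition: the string needs 20 bytes including the '\0'.
    // The language only says this is undefined behaviour; what actually happens
    // is decided by the next lower level (the allocator's heap layout) - the
    // very level the abstraction told us we could forget about.
    std::strcpy(buf, "nineteen characters");

    std::free(buf);
    return 0;
}

Depending on the allocator, those extra bytes may land in heap bookkeeping data or in a neighbouring allocation - exactly the lower-level knowledge an attacker uses to steer the system.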
The case of memory allocation bugs is particularly bad because it has turned out to be really, really hard to manage memory manually without a single error in a large system. This could be seen as a failed case of abstraction: although it is possible to do everything you need with the C malloc API, it is simply too easy to abuse. Parts of the programming community now think that this was the wrong place at which to introduce a level boundary into the system, and instead promote languages with automatic memory management and garbage collection, which lose some power but provide protection against memory corruption and undefined behaviour. In fact, a major reason for still using C++ nowadays is precisely that it allows you to control exactly which resources are acquired and released when. In this way, the major schism between managed and unmanaged languages today can be seen as a disagreement about where precisely to define a layer of abstraction.
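To make the C++ side of that trade-off concrete, here is a minimal RAII sketch; FileHandle and data.txt are hypothetical, invented for illustration:

#include <cstdio>

// RAII: the resource's lifetime is tied to a scope, so release is
// deterministic and cannot be forgotten.
struct FileHandle
{
    std::FILE *f;
    explicit FileHandle(const char *path) : f(std::fopen(path, "r")) {}
    ~FileHandle() { if (f) std::fclose(f); }  // released here, exactly once
    FileHandle(const FileHandle &) = delete;  // no accidental double-close
    FileHandle &operator=(const FileHandle &) = delete;
};

int main()
{
    {
        FileHandle log("data.txt");  // acquired exactly here
        // ... read from log.f ...
    }  // released exactly here, even on an early return

    // A garbage collector reclaims memory for you, but *when* is up to the
    // collector; C++ lets you pin down both the what and the when.
    return 0;
}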
The same can be said of many other major alternative paradigms in computing - the issue crops up everywhere large systems have to be constructed, because we are simply unable to engineer solutions from scratch for the complex requirements common today. (A common viewpoint in AI these days is that the human brain actually works the other way - behaviour arising through feedback loops, massively interconnected networks, etc., instead of separate modules and layers with simple, abstracted interfaces between them - and that this is why we have had so little success in simulating our own intelligence.)
Best Answer
It can, but likely won't lead to a problem.
It's just economics. If the vast majority of people lose the ability to understand the underlying architecture, and there is still a huge NEED to understand the underlying architecture, then the ones who do will have jobs and get paid more, while those who don't will only have jobs where that is not needed (and may still get paid more...who knows?).
Is it helpful to know? Absolutely. You'll likely be better. Is it necessary in most cases? No. That's why abstraction is so great: we stand on the shoulders of giants without having to be giants ourselves (but there will always be giants around).