No, the textbook is right.
Remember the difference between resistors in parallel and in series? This is pretty much the same.
The difference between 2.17 and 2.19 is the inverters. Those inverters are an extra step in the gate series, therefore 2.19 has a series of 3 and 2.17 one of 2.
You can see this from the input's point of view: in figure 2.17, the input will first go through G1 or G2, and after that G3. That's two steps. G1 and G2 are symmetric, so they are in parallel. But since the output of G1 is connected to the input of G3, they are in series.
In figure 2.19, though, the input is inverted first, so that's an extra step. The inverters are symmetric and thus in parallel, as are G1 and G2, and G3 stands alone.
Your post is naive, but that is not necessarily a bad thing, for two reasons: 1. There are others in your position who can benefit from your question. And 2. Sometimes people with decades of experience need to revisit these sorts of subjects from time to time to refresh their memory of what is important in circuits.
There are some terms that EE's use that are taught in school and in textbooks, but are rarely used professionally. Sequential logic is one of them. The professional term is "state machine". A state machine is essentially the guts of what you think of as sequential logic.
A "state" is simply the current condition of something. The state of a counter is the count value itself. A state of a stoplight is Red, Yellow, or Green.
When you say "memory is the ability to store and retrieve past signals", you are correct-- but nobody talks like that. We say that the state is stored. It is a minor point, but an important one. Storing a past signal implies that you are storing a signal that changes over time. Storing a state is storing the instantaneous value of the state. Take that little bit of knowledge and tuck it away in your brain for later, when it will make sense to you.
For us, there are two basic types of logic circuits: combinatorial and memory. Combinatorial logic is just logic where the outputs are dependent only on the inputs. It is a cluster of gates with no feedback paths (where gates downstream do not feed inputs of upstream gates). Memory is the opposite of combinatorial logic, in that it stores a value or state for use later. Basic building blocks for memory are the flip-flops and latches. Actual RAM can also be used to store state values, but that is more advanced use.
The core of a state machine (or what you are calling sequential logic) consists of a single block of combinatorial logic, and a chunk of memory to store the output of the combinatorial logic. The output of the memory is fed back into the combinatorial logic. If you are designing a counter, then the combinatorial logic might take the input and add 1 to it. The memory will save that +1 value for the next clock.
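That structure (combinatorial next-state logic feeding a register, whose output feeds back) can be sketched in a few lines. This is only an illustrative simulation, not any particular textbook's circuit; the names and the 4-bit width are my own assumptions:

```python
# Minimal sketch of a state machine: a block of combinatorial logic
# plus a "memory" (register) that captures its output once per clock.

def next_state(state):
    """Combinatorial logic: output depends only on the input (add 1)."""
    return (state + 1) % 16  # assume a 4-bit counter that wraps at 16

state = 0  # the register ("memory") holding the current state
for _ in range(5):  # five clock edges
    state = next_state(state)  # the register saves the +1 value

print(state)  # the counter reads 5 after five clocks
```

The loop variable `state` plays the role of the flip-flops: between "clock edges" it holds the value steady while `next_state` computes what comes next.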
Usually connected to state machines is another chunk of combinatorial logic and possibly some more memory to handle the outputs of the state machine (different from the state value itself). An example of this would be an extra signal from our counter that goes high every time the counter is equal to 4.
Where this extra combinatorial logic (and maybe more memory) sits in relation to the core combinatorial/memory logic is what determines whether this is a Mealy or Moore state machine. I bring up the Mealy and Moore terms only because this is another example of something that is only taught in schools and is almost never used professionally.
But with all this talk about "memory", we have a problem. The way this term is used in this discussion is different than how it is normally used. When you say "memory" to most people they think of RAM and ROM. But memory in this context is normally flip-flops and latches. Usually D-Flip-Flops. The DFF's in a counter will hold one word, and only one word. RAM, on the other hand, will store many words at a time. It is hard to tell from your question, but I think that you are confusing RAM with Flip-Flops and Latches.
Now on to your question: If we can make memory with combinational circuits, why are sequential elements so highly regarded as fundamental to memory?
You can make memory with gates, and you can make combinatorial logic with gates. But combinatorial logic is not memory. In fact, the definition of combinatorial logic is "logic without memory". But almost every useful circuit is made from both memory and combinatorial logic.
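The key ingredient that turns gates into memory is feedback. A quick way to see this is to simulate the classic cross-coupled NOR latch; this is a sketch only (a real latch settles through propagation delays, here we just iterate until the outputs stop changing):

```python
# Cross-coupled NOR gates: gates plus feedback become memory (an SR latch).

def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(s, r, q=0, qn=1):
    """Iterate the feedback loop until Q/Qn are stable."""
    for _ in range(4):
        q_new = nor(r, qn)
        qn_new = nor(s, q_new)
        if (q_new, qn_new) == (q, qn):
            break
        q, qn = q_new, qn_new
    return q, qn

q, qn = sr_latch(s=1, r=0)               # Set: Q goes high
q, qn = sr_latch(s=0, r=0, q=q, qn=qn)   # Hold: inputs idle, Q is remembered
print(q)  # 1 -- the latch "remembers" being set
```

Remove the feedback (Qn feeding back into the first NOR) and the output would depend only on the current inputs; that is exactly the combinatorial case, which by definition remembers nothing.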
What I do not understand from your question is what kind of memory you are referring to. But ultimately it doesn't matter, because sequential elements are not fundamental to either kind of memory. It is the opposite, in fact: memory is fundamental to sequential logic (a.k.a. state machines).
When looking at state machines, sequential logic, synchronous logic, and the like it can be useful to break up the logic into combinatorial logic and flip-flops. Don't break it up in the actual design, but break up how you think of the circuit. This will help you in identifying the parts that matter. It will also help you later on when you have to start thinking about signal timing, clocks, and all of that stuff.
I also advise that you ignore RAM/ROM for now until you understand the rest of this. There is no sense in complicating things at this stage.
Best Answer
Memories and peripheral ICs will typically have many locations that can be selected for reading or writing; in the example above, the 2K devices (EPROM and RAM) containing \$2^{11}\$ (2048) memory cells require 11 address bits, A0 thru A10. These are fed directly into the chip and are internally decoded to select the desired memory location or register. These address lines are not shown in the partial schematics above.
Computer boards with external memory and peripherals connected to the processor may have several chips that need to be addressed. Only one can be connected to the data bus of the computer at a time. Which one is enabled is done via a chip select (CS) line. These lines are normally inverted; i.e. the chip is enabled if the line is low (logic 0), and disabled when the line is high (logic 1). So they are written as \$\small \overline{\text{CS}}\$ to indicate this.
With full address decoding, all the bits of the address bus that are not used to address the internal locations mentioned above are decoded to select a particular chip via its CS line. So for a 16-bit address bus (64K memory map), five lines (A11 thru A15) will be used for the chip select decode and the remaining 11 (A0 thru A10) used for the address bus fed into the chip. The chip will respond to only as many addresses as there are internal memory locations inside the chip. So for example, a 2K memory chip may have addresses 0x0000 thru 0x07FF (2048 altogether) or some other 2K range; any addresses outside of those 2048 addresses will have no effect.
With partial address decoding, some of the address lines which would normally be used to enable the chip select line are left unconnected as far as the address decoding goes; these are called "don't cares". Each line that is specified as a don't care doubles the number of addresses that can select the chip. For example, if A11 was left out of the decoding for the EPROM, it would still respond to address 0x0000 thru 0x07FF, but it would also respond to addresses 0x0800 thru 0x0FFF. So 0x0123 and 0x0923 would address the same internal location.
Why use partial address decoding? It sometimes saves some logic gates. That's really the only reason. In example (a) above, the fully decoded circuit required a NOR gate and an inverter for the EPROM; in the partially decoded example (b), no logic was required at all. However, partial address decoding is usually a bad idea, since it wastes space in your memory map.
The top example (a) is fully decoded; the decoding looks like this:
The A's indicate decoding external to the CS lines, and are address bits in the case of the EPROM and RAM, or assumed to be register selects in the case of the PIO device.
The 2K devices (EPROM and RAM) require 11 address bits A0 thru A10. The top five bits A11 thru A15 are fully decoded to enable the CS lines. So the address range of the EPROM is 0x0000 thru 0x07FF. The address range of the RAM is 0x8000 thru 0x87FF.
The PIO CS is selected when bits A2 thru A15 are high. So the address range is just 0xFFFC thru 0xFFFF.
Looking at the logic equations, where \$\cdot\$ = AND, + = OR, and overbar = NOT:
\$\small CS_{EPROM}= \overline{\small A_{15}+A_{14}+A_{13}+A_{12}+A_{11}}\$ which by De Morgan's laws is the same as:
\$\small CS_{EPROM}=\overline{\small {A_{15}}}\cdot\overline{\small {A_{14}}}\cdot\overline{\small {A_{13}}}\cdot\overline{\small {A_{12}}}\cdot\overline{\small {A_{11}}}\,\,\$(i.e. CS enabled when \$\small A_{15}\$ thru \$\small A_{11}\$ are all low).
Although using a NOR to do an AND'ing function looks odd, doing it this way saved four inverters (NOR and one inverter instead of five inverters and a NAND). But they could have used an OR instead of the NOR and gotten rid of the inverter.
\$\small CS_{RAM}\,\,\,\,\,=\overline{\small \overline{A_{15}}+A_{14}+A_{13}+A_{12}+A_{11}}\$ which is the same as:
\$\small CS_{RAM}\,\,\,\,\,=\small {A_{15}}\cdot\overline{\small {A_{14}}}\cdot\overline{\small {A_{13}}}\cdot\overline{\small {A_{12}}}\cdot\overline{\small {A_{11}}}\,\,\$(i.e. CS enabled when \$\small A_{15}\$ is high and \$\small A_{14}\$ thru \$\small A_{11}\$ are all low).
Doing it this way saved three inverters (NOR and two inverters instead of five inverters and a NAND). But they could have used an OR instead of the NOR and gotten rid of one of the two inverters.
\$\small CS_{PIO}\,\,\,\,\,\,\,=\small \overline{\overline{A_{15}\cdot A_{14}\cdot A_{13}\cdot A_{12}\cdot A_{11}\cdot A_{10}\cdot A_{9}\cdot A_{8}} + \overline{A_{7}\cdot A_{6}\cdot A_{5}\cdot A_{4}\cdot A_{3}\cdot A_{2}}}\$ which is the same as:
\$\small CS_{PIO}\,\,\,\,\,\,\,=\small A_{15}\cdot A_{14}\cdot A_{13}\cdot A_{12}\cdot A_{11}\cdot A_{10}\cdot A_{9}\cdot A_{8}\cdot A_{7}\cdot A_{6}\cdot A_{5}\cdot A_{4}\cdot A_{3}\cdot A_{2}\,\,\$ (i.e. CS enabled when \$\small A_{15}\$ thru \$\small A_{2}\$ are all high).
In the last case, I don't know why they didn't use two AND gates and a NAND, instead of the two NAND gates and an OR; the first would have been more straightforward.
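The three fully decoded chip selects can be written as plain boolean functions of the 16-bit address. This is a sketch to check the address ranges, not a gate-level model (the active-low sense of \$\overline{\text{CS}}\$ is dropped here; the functions return True when the chip is enabled):

```python
# Fully decoded chip selects from example (a), as boolean address checks.

def bit(addr, n):
    return (addr >> n) & 1

def cs_eprom(addr):  # enabled when A15..A11 are all low
    return all(bit(addr, n) == 0 for n in range(11, 16))

def cs_ram(addr):    # enabled when A15 is high and A14..A11 are low
    return bit(addr, 15) == 1 and all(bit(addr, n) == 0 for n in range(11, 15))

def cs_pio(addr):    # enabled when A15..A2 are all high
    return all(bit(addr, n) == 1 for n in range(2, 16))

print(cs_eprom(0x07FF), cs_eprom(0x0800))  # True False (2K range ends at 0x07FF)
print(cs_ram(0x8000), cs_ram(0x8800))      # True False
print(cs_pio(0xFFFC), cs_pio(0xFFF8))      # True False
```

Note that each chip responds to exactly one block of the memory map: 0x0000-0x07FF, 0x8000-0x87FF, and 0xFFFC-0xFFFF, matching the ranges worked out above.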
The bottom example (b) is partially decoded; the decoding looks like this (where the x's indicate "don't care" lines -- note the top example has no x's, that's why it is considered fully decoded):
Once again, the 2K devices (EPROM and RAM) require 11 address bits, A0 thru A10. Only the top bit is used to enable the CS line of the EPROM, and the top two bits are used to select the CS lines of the RAM and PIO.
Due to the partial decoding, the EPROM can be addressed using the entire range 0x0000 thru 0x7FFF, or broken up into 2K blocks:
0x0000 thru 0x07FF, 0x0800 thru 0x0FFF, 0x1000 thru 0x17FF, 0x1800 thru 0x1FFF, 0x2000 thru 0x27FF, 0x2800 thru 0x2FFF, 0x3000 thru 0x37FF, 0x3800 thru 0x3FFF, 0x4000 thru 0x47FF, 0x4800 thru 0x4FFF, 0x5000 thru 0x57FF, 0x5800 thru 0x5FFF, 0x6000 thru 0x67FF, 0x6800 thru 0x6FFF, 0x7000 thru 0x77FF, 0x7800 thru 0x7FFF
The RAM is almost the same, except the high bit A15 is 1 and A14 is 0. (A14 differentiates the RAM from the PIO, which also has the high bit set.) It can be addressed using the range 0x8000 thru 0xBFFF, or broken up into 2K blocks:
0x8000 thru 0x87FF, 0x8800 thru 0x8FFF, 0x9000 thru 0x97FF, 0x9800 thru 0x9FFF, 0xA000 thru 0xA7FF, 0xA800 thru 0xAFFF, 0xB000 thru 0xB7FF, 0xB800 thru 0xBFFF
The PIO chip is addressed with the top two address bits high. Assuming the PIO still has only two bits of register addressing, A0 and A1, then bits A2 thru A13 are not decoded, allowing a range of 0xC000 thru 0xFFFF. I'm not going to write out all of the ranges (there are 4096 of them), but they start out as 0xC000 thru 0xC003, and the last range is 0xFFFC thru 0xFFFF.
Looking at the logic equations,
\$\small CS_{EPROM}=\overline{\small {A_{15}}}\,\,\,\,\,\,\$(i.e. CS enabled when \$\small A_{15}\$ is low).
\$\small CS_{RAM}\,\,\,\,\,=\small A_{15}\cdot\overline{\small {A_{14}}}\,\,\$(i.e. CS enabled when \$\small A_{15}\$ is high and \$\small A_{14}\$ is low).
\$\small CS_{PIO}\,\,\,\,\,\,\,=\small A_{15}\cdot A_{14}\,\,\$(i.e. CS enabled when \$\small A_{15}\$ and \$\small A_{14}\$ are both high).
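For comparison with the fully decoded case, here are the same three chip selects under partial decoding. Again a sketch (True means enabled, active-low sense dropped); note how each chip now answers an enormous aliased range:

```python
# Partially decoded chip selects from example (b): only the top one or
# two address bits matter, so each chip claims a huge slice of the map.

def bit(addr, n):
    return (addr >> n) & 1

def cs_eprom(addr):  # enabled when A15 is low
    return bit(addr, 15) == 0

def cs_ram(addr):    # enabled when A15 is high and A14 is low
    return bit(addr, 15) == 1 and bit(addr, 14) == 0

def cs_pio(addr):    # enabled when A15 and A14 are both high
    return bit(addr, 15) == 1 and bit(addr, 14) == 1

print(cs_eprom(0x0123), cs_eprom(0x7FFF))  # True True -- the whole lower 32K
print(cs_ram(0x8000), cs_ram(0xBFFF))      # True True -- all of 0x8000-0xBFFF
print(cs_pio(0xC000), cs_pio(0xFFFF))      # True True -- all of 0xC000-0xFFFF
```

Compare `cs_eprom` here (one bit tested, zero gates in hardware) with the five-bit test in the fully decoded version: that is the entire gate-count saving, bought at the cost of the 2K EPROM swallowing the whole lower half of the memory map.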