It probably refers to pipelining, that is, parallel (or semi-parallel) execution of instructions. That's the only scenario I can think of where it does not really matter how long an individual instruction takes, as long as you can have enough instructions running in parallel.
So, the CPU may fetch one instruction (step 1 in the table above), and then, as soon as it proceeds to step 2 for that instruction, it can at the same time (in parallel) start step 1 for the next instruction, and so on.
Let's call our two consecutive instructions A and B. So, the CPU executes step 1 (fetch) for instruction A. Now, when the CPU proceeds to step 2 for instruction A, it cannot yet start step 1 for instruction B, because the program counter has not advanced yet. So, it has to wait until it has reached step 3 for instruction A before it can get started with step 1 for instruction B. This is the time it takes to start another instruction, and we want to keep it at a minimum (start instructions as quickly as possible) so that we can be executing as many instructions in parallel as possible.
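To put rough numbers on that (a toy model of my own, not from the question's table): if every instruction takes `depth` cycles end-to-end, but a new one can be started every `start_interval` cycles, then once the pipeline is full the throughput is governed almost entirely by the start interval, not by the per-instruction latency.

```c
#include <stdio.h>

/* Back-of-the-envelope pipeline model (my own illustration; all numbers
 * are made up): each instruction takes `depth` cycles from fetch to
 * completion, and a new instruction can be started every
 * `start_interval` cycles. */
static double throughput(int n, int depth, int start_interval)
{
    /* The last instruction starts at cycle start_interval*(n-1) and
     * finishes `depth` cycles later. */
    int total_cycles = start_interval * (n - 1) + depth;
    return (double)n / total_cycles;    /* instructions per cycle */
}

int main(void)
{
    /* Same per-instruction latency (6 cycles), different start intervals. */
    printf("start every 3 cycles: %.2f instr/cycle\n", throughput(1000, 6, 3));
    printf("start every 1 cycle:  %.2f instr/cycle\n", throughput(1000, 6, 1));
    return 0;
}
```

With the same 6-cycle latency, starting an instruction every cycle instead of every third cycle roughly triples the instructions executed per second, which is exactly the effect the answer above is describing.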
CISC architectures have instructions of varying lengths: some instructions are only one byte long, others are two bytes long, and yet others are several bytes long. This makes it hard to increment the program counter immediately after fetching an instruction, because the instruction has to be decoded to a certain degree in order to figure out how many bytes long it is. On the other hand, one of the primary characteristics of RISC architectures is that all instructions have the same length, so the program counter can be incremented immediately after fetching instruction A, meaning that the fetching of instruction B can begin immediately afterwards. That's what the author means by starting instructions quickly, and that's what increases the number of instructions that can be executed per second.
In the above table, step 2 says "Change the program counter to point to the following instruction" and step 3 says "Determine the type of instruction just fetched." These two steps can be in that order only on RISC machines. On CISC machines, you have to determine the type of instruction just fetched before you can change the program counter, so step 2 has to wait. This means that on CISC machines the next instruction cannot be started as quickly as it can be started on a RISC machine.
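As a sketch of that step-2-versus-step-3 difference, here is a hypothetical CISC encoding (one-byte opcodes, made-up lengths) next to a fixed 4-byte RISC encoding; everything here is invented for illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy memory holding a couple of fake instructions. */
static uint8_t mem[16] = { 0x01, 0xAA, 0x02, 0xBB, 0xCC };

/* On a CISC machine, the opcode must be (at least partially) decoded
 * before the instruction's length -- and hence the next PC -- is known. */
static int decode_length(uint8_t opcode)
{
    return (opcode == 0x01) ? 2 : 3;    /* made-up encodings */
}

static uint32_t next_pc_cisc(uint32_t pc)
{
    uint8_t opcode = mem[pc];           /* step 1: fetch                  */
    return pc + decode_length(opcode);  /* step 3 must run before step 2  */
}

/* On a RISC machine, every instruction is the same size, so the PC can
 * be bumped the moment the fetch is issued, before any decoding. */
static uint32_t next_pc_risc(uint32_t pc)
{
    return pc + 4;                      /* step 2 right after step 1      */
}

int main(void)
{
    printf("CISC: next instruction at %u\n", next_pc_cisc(0));
    printf("RISC: next instruction at %u\n", next_pc_risc(0));
    return 0;
}
```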
In most cases, yes, the cycle time for each stage is fixed; there are some exceptions, depending on the processor. But the description you give is vastly over-simplified.

Modern processors are organised in pipelines, so that one stage of execution of one instruction can occur at the same time as a stage of another. While some processors use a 6-stage pipeline like the one you describe, they are a small minority. Most modern processors split the operation into many more stages, each of which takes one cycle. For example, Intel Core processors of the current generation have 19 stages, each of which takes a single cycle. In some circumstances an instruction may skip one of them.

Usually, multiple instructions are executing in different stages simultaneously, but some instructions in some circumstances will prevent others from progressing (e.g. branch mispredictions, or instructions that have to wait for data that has not been produced yet). Also, the processor core may have multiple pipelines, so multiple instructions run completely in parallel, and in some architectures not all pipelines are capable of executing all instruction types. Instruction fetch and decoding is shared between all pipelines and in many cases can handle several instructions per cycle.

In modern processors based on CISC instruction sets, like Intel x86, the instructions are translated into RISC-like micro-instructions before execution, so one program instruction may translate to multiple instructions in the pipeline (or vice versa). Determining the actual performance in real-world situations is extremely difficult.
This is only a partial response, so it may be disallowed:
Actually, if I'm remembering ARM correctly, what's happening is that, in a single memory cycle, you can fetch either a single 32-bit ARM instruction or two 16-bit THUMB instructions. Execution time is, of course, instruction-dependent.
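As a sketch of the fetch half of that, assuming a little-endian 32-bit bus (the helper name and layout are mine, for illustration):

```c
#include <stdint.h>
#include <stdio.h>

/* One 32-bit bus read yields one ARM instruction, or two THUMB
 * halfword instructions, depending on the CPU state. */
static uint32_t read_bus32(const uint8_t *mem, uint32_t addr)
{
    return (uint32_t)mem[addr]
         | (uint32_t)mem[addr + 1] << 8
         | (uint32_t)mem[addr + 2] << 16
         | (uint32_t)mem[addr + 3] << 24;   /* little-endian */
}

int main(void)
{
    uint8_t rom[4] = { 0x12, 0x34, 0x56, 0x78 };
    uint32_t word = read_bus32(rom, 0);

    /* ARM state: the whole word is one instruction. */
    printf("ARM:   %08X\n", word);

    /* THUMB state: the same word holds two halfword instructions. */
    printf("THUMB: %04X, %04X\n", word & 0xFFFF, word >> 16);
    return 0;
}
```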
Different regions of memory may have more data clients than just the CPU. This is particularly true in graphics/video memory where pixels need to be pulled out of RAM to send off to the display. Unless you use faster (read: more expensive) memory, the CPU and graphics hardware will have to take turns accessing RAM. Thus, depending on how many other RAM clients there are, it may take several cycles before the CPU's turn at RAM arrives.
It may not even be this noble -- the manufacturer may have simply decided that they could get away with slower (read: cheaper) RAM for most of program and data space, and used a smaller block of fast memory only for those things that absolutely needed it.
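If you wanted to fold those two effects into a timing model, one rough structure (entirely my own guess, with made-up region names and wait-state values) is a per-region base cost plus an extra stall when a shared region's other client is on the bus:

```c
#include <stdbool.h>
#include <stdio.h>

/* Very rough model, not from any GBA doc: each memory region has a base
 * wait-state count, and regions shared with the video hardware can add
 * extra stall cycles while that hardware is using the bus. */
typedef struct {
    const char *name;
    int  base_waits;    /* extra cycles on every access */
    bool shared;        /* also used by the display hardware? */
} Region;

static int access_cycles(const Region *r, bool video_busy)
{
    int cycles = 1 + r->base_waits;     /* 1 cycle + region wait states */
    if (r->shared && video_busy)
        cycles += 1;                    /* lose a turn to the video side */
    return cycles;
}

int main(void)
{
    Region fast_ram   = { "fast RAM",   0, false };  /* illustrative values */
    Region video_ram  = { "video RAM",  1, true  };

    printf("%s: %d cycles\n", fast_ram.name,  access_cycles(&fast_ram,  true));
    printf("%s: %d cycles\n", video_ram.name, access_cycles(&video_ram, true));
    return 0;
}
```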
Sequential versus non-sequential refers to reading memory locations in order versus out-of-order. RAM is arranged as a two-dimensional array of memory cells indexed by row and column address. If you access memory locations in ascending address order (e.g. 0, 1, 2, 3, 4, 5, etc.), then all you need to change (most of the time) is the column address, and each successive read can be satisfied more quickly than if you accessed addresses non-sequentially. Judging from the doc you linked to, it looks like you get a CAS increment for free, but a full RAS+CAS costs an extra cycle.
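Here's a toy illustration of that row/column split; the 8-bit column width is an assumption for the sake of the example, not something from the linked doc:

```c
#include <stdint.h>
#include <stdio.h>

/* Why sequential reads are cheaper: split an address into DRAM row and
 * column, and charge an extra cycle whenever the row changes (a full
 * RAS+CAS instead of CAS-only). */
#define COL_BITS 8                      /* assumed column width */

static uint32_t last_row = 0xFFFFFFFF;  /* no row open yet */

static int read_cost(uint32_t addr)
{
    uint32_t row = addr >> COL_BITS;    /* RAS part of the address */
    int cost = 1;                       /* CAS-only access */
    if (row != last_row) {
        cost += 1;                      /* full RAS+CAS: one extra cycle */
        last_row = row;
    }
    return cost;
}

int main(void)
{
    /* The sequential walk stays in one row; the jump forces a new RAS. */
    printf("addr 0:       %d cycle(s)\n", read_cost(0));
    printf("addr 4:       %d cycle(s)\n", read_cost(4));
    printf("addr 0x10000: %d cycle(s)\n", read_cost(0x10000));
    return 0;
}
```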
Internal Cycle means the CPU is going, "Wow, this is complicated; gimme some extra time to figure this out." (The MUL instructions are prime examples of this.)
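If I recall the ARM7TDMI datasheet correctly, MUL's internal-cycle count comes from early termination on the multiplier operand: the fewer significant bytes it has, the fewer multiplier-array cycles are needed. A sketch, with thresholds worth double-checking against the datasheet:

```c
#include <stdint.h>
#include <stdio.h>

/* Early-termination multiply, as I remember it: MUL costs 1S + m
 * internal cycles, where m depends on how many significant bytes the
 * multiplier operand (Rs) has. Verify these thresholds before relying
 * on them in an emulator. */
static int mul_internal_cycles(uint32_t rs)
{
    uint32_t top = rs >> 8;
    if (top == 0x000000 || top == 0xFFFFFF) return 1;  /* bits 31:8 all same  */
    top = rs >> 16;
    if (top == 0x0000 || top == 0xFFFF)     return 2;  /* bits 31:16 all same */
    top = rs >> 24;
    if (top == 0x00 || top == 0xFF)         return 3;  /* bits 31:24 all same */
    return 4;
}

int main(void)
{
    printf("MUL by 3:          %d internal cycle(s)\n", mul_internal_cycles(3));
    printf("MUL by 0x12345678: %d internal cycle(s)\n",
           mul_internal_cycles(0x12345678));
    return 0;
}
```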
Coprocessor Cycle refers to the time required to talk to an ARM coprocessor. I don't know what, if any, ARM coprocessors are in the GBA.
As for implementation: that doesn't lend itself to a single answer. If it were me, I'd model RAM access times independently of instruction execution times. When the CPU reads or writes memory, compare the memory address being accessed with the previous memory address the CPU accessed. If they're adjacent (M(current) == M(previous) + 4), the access is sequential and costs nothing extra; otherwise add a one-cycle penalty. Then add the delays imposed by the memory region you're accessing.
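Here's roughly how that could look in C; the structure, region boundaries, and wait-state values are placeholders of mine, not real GBA timings:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One way to structure the model described above: remember the previous
 * access address, classify each new access as sequential or
 * non-sequential, and add the region's wait states on top. */
typedef struct {
    uint32_t prev_addr;     /* last address the CPU touched */
    bool     valid;         /* false until the first access */
} BusState;

/* Illustrative region penalty; a real core would look this up from the
 * address's high bits and the console's waitstate configuration. */
static int region_waits(uint32_t addr)
{
    return (addr >= 0x08000000) ? 2 : 0;   /* pretend ROM is slower */
}

static int access_cycles(BusState *bus, uint32_t addr)
{
    int cycles = 1 + region_waits(addr);
    /* Non-sequential if this isn't the word right after the last one. */
    bool sequential = bus->valid && addr == bus->prev_addr + 4;
    if (!sequential)
        cycles += 1;                       /* one-cycle N-access penalty */
    bus->prev_addr = addr;
    bus->valid = true;
    return cycles;
}

int main(void)
{
    BusState bus = { 0, false };
    printf("first fetch:      %d cycles\n", access_cycles(&bus, 0x08000000));
    printf("next word:        %d cycles\n", access_cycles(&bus, 0x08000004));
    printf("branch elsewhere: %d cycles\n", access_cycles(&bus, 0x03000000));
    return 0;
}
```

The point of keeping the bus state separate from the instruction logic is that fetches, loads, and stores can all share one timing function, and sequential-versus-non-sequential classification falls out of a single address comparison.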