Currently I am simply saying that every ARM instruction costs 4 cycles and
every THUMB instruction costs 2 cycles.
Actually, if I'm remembering ARM correctly, what's actually happening is that, in a single memory cycle, you can either fetch a single 32-bit ARM instruction, or two 16-bit THUMB instructions. Execution time is, of course, instruction-dependent.
Do instruction cycle counts vary depending on
which section of memory is currently being accessed?
http://nocash.emubase.de/gbatek.htm#cpuinstructioncycletimes According
to the above specification, different memory areas have
different waitstates, but I don't know exactly what that means.
Different regions of memory may have more data clients than just the CPU. This is particularly true in graphics/video memory where pixels need to be pulled out of RAM to send off to the display. Unless you use faster (read: more expensive) memory, the CPU and graphics hardware will have to take turns accessing RAM. Thus, depending on how many other RAM clients there are, it may take several cycles before the CPU's turn at RAM arrives.
It may not even be this noble -- the manufacturer may have simply decided that they could get away with slower (read: cheaper) RAM for most of program and data space, and used a smaller block of fast memory only for those things that absolutely needed it.
Furthermore, what are Non-sequential cycles, Sequential cycles, Internal
cycles, and Coprocessor cycles for?
Sequential versus non-sequential refers to reading memory locations in order versus out-of-order. RAM is arranged as a two-dimensional array of memory cells indexed by row and column address. If you access memory locations in ascending address order (e.g. 0, 1, 2, 3, 4, 5, etc.), then all you need to change (most of the time) is the column address, and each successive read can be satisfied more quickly than if you accessed addresses non-sequentially. Judging from the doc you linked to, it looks like you get a CAS increment for free, but a full RAS+CAS costs an extra cycle.
Internal Cycle means the CPU is going, "Wow, this is complicated; gimme some extra time to figure this out." (The MUL instructions are sharp examples of this.)
Coprocessor Cycle refers to the time required to talk to an ARM coprocessor. I don't know what, if any, ARM coprocessors are in the GBA.
As for implementation: That doesn't lend itself to a single answer. If it were me, I'd model RAM access times independently from instruction execution times. When the CPU reads/writes memory, compare the memory address accessed with the previous memory address the CPU accessed. If they're adjacent (M(current) == M(previous) + 4), then it's free; otherwise add a one-cycle penalty. Then add the delays imposed by the memory region you're accessing.
It depends.
Some (CISC) CPUs have byte-wise loads that can address individual bytes so the byte of interest is the low-order 8-bits on the bus; the rest of the bits are masked off.
Many RISC CPUs will do a word-load followed by a barrel shift, others a word-load followed by a bit shift, and in the middle are the ones that do a word-load followed by a byte shift.
Some CPUs will do consecutive word-loads when a two-byte value spans a 32-bit boundary, shifting and masking the words together.
CPU families may do different implementations depending on the particular processor model. That explains why there is no description of the implementation; it's a decision only the vendor cares about.
As for performance, you will just have to test it on the particular CPU and memory configurations you care about.
Best Answer
It probably refers to pipelining, that is, parallel (or semi-parallel) execution of instructions. That's the only scenario I can think of where it does not really matter how long something takes, as long as you can have enough of them running in parallel.
So, the CPU may fetch one instruction, (step 1 in the table above,) and then as soon as it proceeds to step 2 for that instruction, it can at the same time (in parallel) start with step 1 for the next instruction, and so on.
Let's call our two consecutive instructions A and B. So, the CPU executes step 1 (fetch) for instruction A. Now, when the CPU proceeds to step 2 for instruction A, it cannot yet start with step 1 for instruction B, because the program counter has not advanced yet. So, it has to wait until it has reached step 3 for instruction A before it can get started with step 1 for instruction B. This is the time it takes to start another instruction, and we want to keep this at a minimum, (start instructions as quickly as possible,) so that we can be executing in parallel as many instructions as possible.
CISC architectures have instructions of varying lengths: some instructions are only one byte long, others are two bytes long, and yet others are several bytes long. This does not make it easy to increment the program counter immediately after fetching one instruction, because the instruction has to be decoded to a certain degree in order to figure out how many bytes long it is. On the other hand, one of the primary characteristics of RISC architectures is that all instructions have the same length, so the program counter can be incremented immediately after fetching instruction A, meaning that the fetching of instruction B can begin immediately afterwards. That's what the author means by starting instructions quickly, and that's what increases the number of instructions that can be executed per second.
In the above table, step 2 says "Change the program counter to point to the following instruction" and step 3 says "Determine the type of instruction just fetched." These two steps can be in that order only on RISC machines. On CISC machines, you have to determine the type of instruction just fetched before you can change the program counter, so step 2 has to wait. This means that on CISC machines the next instruction cannot be started as quickly as it can be started on a RISC machine.