Electronics – How different are 8-bit microcontrollers from 32-bit microcontrollers when it comes to programming them

microcontroller, programming

Right, so we have 8-bit, 16-bit and 32-bit microcontrollers in this world at the moment. All of them are in common use. How different is it to program 8-bit and 16-bit microcontrollers? I mean, does it require a different technique or skill set? Let's take Microchip for example. What new things does a person need to learn if they want to transition from 8-bit microcontrollers to 32-bit microcontrollers?

Best Answer

In general, going from 8 to 16 to 32-bit microcontrollers means you will have fewer constraints on resources, particularly memory, and on the width of the registers used for arithmetic and logical operations. The 8-, 16-, and 32-bit monikers generally refer to both the size of the internal and external data buses and the size of the internal register(s) used for arithmetic and logical operations (there used to be just one or two, called accumulators; now there are usually register banks of 16 or 32).

I/O port sizes will also generally follow the data bus size, so an 8-bit micro will have 8-bit ports, a 16-bit micro will have 16-bit ports, and so on.

Despite having an 8-bit data bus, many 8-bit microcontrollers have a 16-bit address bus and can address 2^16 or 64K bytes of memory (that doesn't mean they have anywhere near that implemented). But some 8-bit micros, like the low-end PICs, may have only a very limited RAM space (e.g. 96 bytes on a PIC16).

To get around their limited addressing scheme, some 8-bit micros use paging, where the contents of a page register determines one of several banks of memory to use. There will usually be some common RAM available no matter what the page register is set to.
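As a rough sketch of what paging looks like from code, assume a hypothetical 8-bit part with a memory-mapped bank-select register (the register names and addresses below are invented for illustration; on a real PIC16 the compiler or a BANKSEL directive normally handles this for you):

```c
#include <stdint.h>

/* Hypothetical addresses, for illustration only. */
#define PAGE_REG     (*(volatile uint8_t *)0x0008u)  /* bank-select (page) register             */
#define BANK_WINDOW  ((volatile uint8_t *)0x00A0u)   /* window into the currently selected bank */

uint8_t read_banked(uint8_t bank, uint8_t offset)
{
    PAGE_REG = bank;             /* point the window at the desired physical bank */
    return BANK_WINDOW[offset];  /* this access now goes to that bank             */
}
```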

16-bit microcontrollers are generally restricted to 64K of memory, but may also use paging techniques to get around this. 32-bit microcontrollers, of course, have no such restrictions and can address up to 4 GB of memory.

Along with the different memory sizes comes the stack size. In the lower-end micros, the stack may be implemented in a special area of memory and be very small (many PIC16s have an 8-level-deep call stack). In the 16-bit and 32-bit micros, the stack will usually be in general RAM and be limited only by the size of that RAM.

There are also vast differences in the amount of memory -- both program and RAM -- implemented on the various devices. 8-bit micros may only have a few hundred bytes of RAM and a few thousand bytes of program memory (or much less -- for example the PIC10F320 has only 256 14-bit words of flash and 64 bytes of RAM). 16-bit micros may have a few thousand bytes of RAM and tens of thousands of bytes of program memory. 32-bit micros often have over 64K bytes of RAM, and maybe 1/2 MB or more of program memory (the PIC32MZ2048 has 2 MB of flash and 512 KB of RAM; the newly released PIC32MZ2064DAH176, optimized for graphics, has 2 MB of flash and a whopping 32 MB of on-chip RAM).

If you are programming in assembly language, the register-size limitations will be very evident; for example, adding two 32-bit numbers is a chore on an 8-bit microcontroller but trivial on a 32-bit one. If you are programming in C, this will be largely transparent, but of course the underlying compiled code will be much larger for the 8-bitter.
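To make the asymmetry concrete, here is the same one-line addition in C. On a 32-bit core it typically compiles to a single add instruction; an 8-bit compiler has to chain byte-wide additions with carry (the exact sequence depends on the device and compiler):

```c
#include <stdint.h>

uint32_t add32(uint32_t a, uint32_t b)
{
    /* 32-bit core: usually one ADD instruction.
     * 8-bit core:  roughly four byte-wide adds, the upper three using the
     *              carry from the previous one, plus register shuffling. */
    return a + b;
}
```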

I said largely transparent, because the size of various C data types may differ from one size of micro to another; for example, a compiler targeting an 8- or 16-bit micro may use "int" to mean a 16-bit signed variable, while on a 32-bit micro it would be a 32-bit variable. So a lot of programs use #defines (or typedefs) to say explicitly what the desired size is, such as "UINT16" for an unsigned 16-bit variable.
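A portable way to do this today is the C99 <stdint.h> header, whose fixed-width typedefs play the same role as the UINT16-style #defines:

```c
#include <stdint.h>

uint16_t adc_reading;   /* exactly 16 bits, whether the target's int is 16 or 32 bits wide */
int32_t  encoder_pos;   /* exactly 32 bits, signed */

/* Older codebases often do the same thing by hand: */
typedef unsigned short UINT16;   /* assumes short is 16 bits on the target compiler */
```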

If you are programming in C, the biggest impact will be the size of your variables. For example, if you know a variable will always be less than 256 (or in the range -128 to 127 if signed), then you should use an 8-bit type (unsigned char or char) on an 8-bit micro (e.g. PIC16), since using a larger size would be very inefficient. Likewise for 16-bit variables on a 16-bit micro (e.g. PIC24). If you are using a 32-bit micro (PIC32), then it doesn't really make much difference, since the MIPS instruction set has byte, halfword, and word load/store instructions. However, on some 32-bit micros that lack such instructions, manipulating an 8-bit variable may be less efficient than a 32-bit one because of the extra masking.
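One way to keep the same C source reasonable on both ends of the range is the C99 "fast" types, which let the compiler pick the cheapest width that is at least as wide as requested -- a minimal sketch:

```c
#include <stdint.h>

/* uint8_t keeps RAM use and code size down on an 8-bit PIC16;
 * uint_fast8_t lets the compiler widen the loop counter (e.g. to 32 bits
 * on a PIC32) if that avoids masking back down to 8 bits. */
void fill(uint8_t *buf, uint_fast8_t len, uint8_t value)
{
    for (uint_fast8_t i = 0; i < len; i++) {
        buf[i] = value;
    }
}
```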

As forum member vsz pointed out, on systems where a variable is larger than the default register size (e.g. a 16-bit variable on an 8-bit micro) and is shared between two threads, or between the base thread and an interrupt handler, any operation on it (including just reading it) must be made atomic, i.e. made to appear as if it were done in one instruction. Such a stretch of code is called a critical section, and the standard way to protect it is to surround it with a disable/enable interrupt pair.
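A minimal sketch of such a critical section on an 8-bit target, where a 16-bit counter is updated by a timer interrupt. The DISABLE_INTERRUPTS()/ENABLE_INTERRUPTS() macros are placeholders; map them to whatever your compiler provides (for example di()/ei() with Microchip XC8):

```c
#include <stdint.h>

/* Placeholder macros -- substitute the compiler's own intrinsics. */
#define DISABLE_INTERRUPTS()   /* e.g. di() */
#define ENABLE_INTERRUPTS()    /* e.g. ei() */

static volatile uint16_t tick_count;   /* written by a timer ISR, read by main-line code */

uint16_t get_ticks(void)
{
    uint16_t copy;

    /* On an 8-bit core this 16-bit read takes two byte accesses; the ISR
     * could fire between them and leave a torn value, so bracket the read. */
    DISABLE_INTERRUPTS();
    copy = tick_count;
    ENABLE_INTERRUPTS();

    return copy;
}
```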

So, going from 32-bit systems to 16-bit, or from 16-bit to 8-bit, any operations on shared variables that are now larger than the default register size (but weren't before) need to be treated as critical sections.

Another main difference, going from one PIC processor to another, is the handling of peripherals. This has less to do with word size and more to do with the type and number of resources allocated on each chip. In general, Microchip has tried to make programming the same peripheral as similar as possible across different chips (e.g. Timer0), but there will always be differences. Using their peripheral libraries will hide these differences to a large extent. A final difference is the handling of interrupts; again there is help here from the Microchip libraries.