How to Transition from “Basic” Microcontrollers to ARM Cortex


I have many years of experience with 8-bit cores from various manufacturers – namely 8051, PIC, and AVR – and I now have a Cortex M0 to figure out. Specifically this one, but I hope we can be more general than that.

It's turning out to be a bit more than I bargained for, with multiple documents that describe different parts of the system in varying levels of detail and none that I've seen connecting it all together. This is in contrast to having one datasheet that explains everything. I understand there's much more to document in the first place, but the change in format is throwing me for a loop.

The website above has one document that gives a good overview of each subsystem and peripheral in isolation, and another that describes each register in detail. I also have all the source code for their SDK, including header files and some complex examples, but I still see nothing that describes how it all connects together.

Is there a concise walkthrough of the Cortex architecture that explains the function of things that smaller controllers just don't have – like multiple layers of busses from CPU to peripherals, each with its own watchdog timer – and how they all connect together?

Best Answer

I've worked on AVRs as well as ARM Cortex-M3/M4/R4-based MCUs. I think I can offer some general advice. This will assume you're programming in C, not assembly.

The CPU is actually the easy part. The basic C data types will be different sizes, but you're using uint8/16/32_t anyway, right? :-) And now all integer types should be reasonably fast, with 32-bit (int) being the fastest. You probably don't have an FPU, so continue to avoid floats and doubles.
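For example (the sizes here are guaranteed by the C standard; the struct and function are just made up for illustration):

```c
#include <stdint.h>

/* Fixed-width types are the same size on an 8-bit AVR and a 32-bit
   Cortex-M, so data layouts port cleanly. For loop counters where only
   a minimum width matters, the "fast" variants let the compiler pick
   whatever is quickest on the target. */
typedef struct {
    uint8_t  flags;     /* exactly 1 byte on every platform */
    uint16_t adc_raw;   /* exactly 2 bytes                  */
    uint32_t timestamp; /* exactly 4 bytes                  */
} sample_t;

static uint32_t sum_flags(const sample_t *s, uint_fast8_t n)
{
    uint32_t total = 0;
    for (uint_fast8_t i = 0; i < n; i++)  /* fast type: >= 8 bits, speed-optimal */
        total += s[i].flags;
    return total;
}
```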

First, work on your understanding of the system-level architecture. This means IOs, clocking, memory, resets, and interrupts. Also, you need to get used to the idea of memory-mapped peripherals. On AVR you can avoid thinking about that because the registers have unique names with unique global variables defined for them. On more complex systems, it's common to refer to registers by a base address and an offset. It all boils down to pointer arithmetic. If you're not comfortable with pointers, start learning now.
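As a concrete sketch of what those vendor header files are doing under the hood -- the register names, offsets, and bits below are all invented, and a static array stands in for the real peripheral so the example can run anywhere:

```c
#include <stdint.h>

/* A static array stands in for peripheral memory so this sketch runs on
   a PC; on real hardware UART0_BASE would be a fixed address from the
   datasheet (e.g. 0x40004000). Names and offsets are hypothetical. */
static volatile uint32_t fake_uart[4];
#define UART0_BASE ((uintptr_t)fake_uart)

/* Base address + byte offset, cast to a pointer, then dereferenced --
   this is all a "register definition" really is. */
#define UART_REG(off)  (*(volatile uint32_t *)(UART0_BASE + (off)))
#define UART_CTRL      UART_REG(0x0)  /* control register  */
#define UART_BAUD      UART_REG(0x4)  /* baud-rate divisor */
#define UART_CTRL_EN   (1u << 0)      /* enable bit        */

static void uart_init(uint32_t divisor)
{
    UART_BAUD = divisor;          /* plain assignment writes the register */
    UART_CTRL |= UART_CTRL_EN;    /* read-modify-write to set one bit     */
}
```

Once you see that it's just pointer arithmetic, the base+offset tables in the register reference stop looking mysterious.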

For IOs, figure out how the peripheral muxing is handled. Is there a central mux control to select which pins are peripheral signals and which are GPIOs? Or do you set pins to peripheral mode using the peripheral registers? And of course you'll need to know how to configure GPIOs as inputs and outputs, and enable open-drain mode and pull-ups/downs. External interrupts usually fall into this category as well. GPIOs are pretty generic, so your experience should serve you well here.
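Here's a rough sketch of the struct-overlay style many vendor headers use for GPIO. Everything here (register names, bit meanings) is invented for illustration; your part's reference manual has the real layout:

```c
#include <stdint.h>

/* Hypothetical GPIO block. A struct overlay is a common alternative to
   base+offset macros: the struct layout mirrors the register map. */
typedef struct {
    volatile uint32_t DIR;   /* 1 = output, 0 = input  */
    volatile uint32_t OUT;   /* output data            */
    volatile uint32_t IN;    /* input data (read-only) */
    volatile uint32_t PULL;  /* 1 = pull-up enabled    */
} gpio_t;

/* On real hardware this would be: #define GPIOA ((gpio_t *)0x40010000) */
static gpio_t fake_gpioa;
#define GPIOA (&fake_gpioa)

static void gpio_set_output(gpio_t *g, int pin) { g->DIR |= (1u << pin); }

static void gpio_write(gpio_t *g, int pin, int v)
{
    if (v) g->OUT |=  (1u << pin);
    else   g->OUT &= ~(1u << pin);
}
```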

Clocking boils down to a few things. You start with a clock source, typically a crystal or internal RC oscillator. This is used to create one or more system-level clock domains. Higher-speed chips will use a PLL, which you can think of as a frequency multiplier. There will also be clock dividers at various points. The key things to consider are what your CPU clock frequency should be and what bit rates you need for your communication peripherals. Usually this is pretty flexible. When you get more advanced, you can learn about things like low-power modes, which are usually based on clock gating.
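The arithmetic itself is simple enough to sketch. The numbers below (8 MHz crystal, x9 PLL, 115200 baud) are just an example, and real parts constrain which multiplier/divider values are legal:

```c
#include <stdint.h>

/* Clock-tree arithmetic only -- no hardware touched. On a real part the
   multiplier and divider come from fields in the clock-control registers. */
static uint32_t pll_out(uint32_t src_hz, uint32_t mult, uint32_t div)
{
    return src_hz * mult / div;
}

/* Typical UART oversample-by-16 divisor calculation, rounded to nearest.
   The rounding matters: the actual baud rate should stay within a few
   percent of the target or the link won't work. */
static uint32_t uart_divisor(uint32_t periph_hz, uint32_t baud)
{
    return (periph_hz + 8u * baud) / (16u * baud);
}
```

So an 8 MHz crystal through a x9 PLL gives a 72 MHz system clock, and 72 MHz / (16 x 115200) rounds to a divisor of 39.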

Memory means flash and RAM. If you have enough RAM, it's often faster to keep your program there during early development so you don't have to program the flash over and over. The big issue here is memory management. Your vendor should provide sample linker scripts, but you might need to allocate more memory to code, constants, global variables, or the stack depending on the nature of your program. More advanced topics include code security and run-time flash programming.
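If you're using GCC or Clang, you can steer individual objects into linker-script sections directly from C. The section names below are invented -- this only works if your vendor's linker script actually defines matching output sections and memory regions:

```c
#include <stdint.h>

/* Hypothetical section names: ".dma_buf" might be placed in a
   DMA-capable RAM bank, ".fast_data" in tightly-coupled RAM. The real
   names and regions come from your vendor's linker script. */
__attribute__((section(".dma_buf")))
static uint8_t dma_buffer[256];

__attribute__((section(".fast_data")))
static volatile uint32_t isr_counter;

static void count_event(void)
{
    isr_counter++;   /* hot data, so we asked the linker for fast RAM */
}
```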

Resets are pretty straightforward. Usually you only have to look out for the watchdog timer, which may be enabled by default. Resets are more important during debugging when you run the same code over and over. It's easy to miss a bug due to sequencing issues that way.
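A typical watchdog service routine looks something like this. The register and key values are invented (a static variable stands in for the real register), but many real watchdogs do require a magic sequence like this so runaway code can't accidentally keep the dog fed:

```c
#include <stdint.h>

/* Hypothetical watchdog feed register and unlock keys. */
static volatile uint32_t fake_wdt_feed;
#define WDT_FEED   fake_wdt_feed
#define WDT_KEY_A  0xAAu
#define WDT_KEY_B  0x55u

static void wdt_service(void)
{
    WDT_FEED = WDT_KEY_A;   /* first half of the unlock sequence        */
    WDT_FEED = WDT_KEY_B;   /* second half -- timer is now reloaded     */
}
```

Call this from your main loop, not from a timer interrupt -- otherwise the watchdog keeps getting fed even when the main loop is wedged, which defeats the point.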

There are two things you need to know about interrupts -- how you enable and disable them, and how you configure the interrupt vectors. AVR-GCC does the latter for you with the ISR() macros, but on other architectures you might have to write a function address to a register manually.
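Conceptually, "configuring a vector" is just writing a function address into a table slot. This sketch uses a plain array so it runs anywhere; on a real Cortex-M the table lives at the start of flash (larger cores can relocate it with the VTOR register), and CMSIS gives you helpers like NVIC_EnableIRQ() for the enable/disable side:

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal RAM vector-table sketch. Slot count and dispatch are made up;
   on real hardware the NVIC does the dispatching for you. */
typedef void (*isr_t)(void);

#define NUM_IRQS 8
static isr_t vector_table[NUM_IRQS];

static volatile int timer_ticks;
static void timer_isr(void) { timer_ticks++; }

/* "Installing" a handler is just writing a function address to a slot. */
static void irq_install(unsigned irq, isr_t handler)
{
    if (irq < NUM_IRQS)
        vector_table[irq] = handler;
}

/* Stand-in for the hardware firing an interrupt. */
static void irq_fire(unsigned irq)
{
    if (irq < NUM_IRQS && vector_table[irq] != NULL)
        vector_table[irq]();
}
```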

Microcontroller peripherals are usually independent of each other, so you can learn them one at a time. It might help to pick one peripheral and use it to learn part of the system-level stuff. Comm peripherals and PWMs are good for clocking and IOs, and timers are good for interrupts.

Don't be intimidated by the level of complexity. Those "basic" microcontrollers have already taught you much of what you need to know. Please let me know if you need me to clarify anything.