Electronic – FAT filesystem and flash endurance

flash, storage, wear-leveling

I am considering replacing an SD card in a tiny embedded system (a Cortex-M4 running an RTOS) with some kind of dedicated flash storage chip. However, I am worried about the chip's endurance.

The embedded system uses FatFs and writes about 100 bytes roughly once a minute. My understanding is that this use case is fine on an SD card: that's about 2 block writes every minute, so roughly 10 million block writes over the product's 10-year life. That seems reasonable for an SD card with 40k blocks of 128 kB and about 1000 write cycles per block.
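For reference, here is the rough arithmetic behind those numbers as a small C sketch (the 2-writes-per-record figure and the card geometry are just my own assumptions):

```c
/* Back-of-the-envelope endurance check; all figures are assumptions
 * taken from the estimate above, not measured values. */
#include <stdio.h>

int main(void)
{
    /* One ~100-byte record per minute; assume each record costs about
     * two block writes on the card (data sector + FAT update). */
    const double minutes_in_10_years = 365.0 * 24.0 * 60.0 * 10.0;
    const double block_writes_needed = 2.0 * minutes_in_10_years;   /* ~10.5 million */

    /* Assumed card geometry: 40k erase blocks of 128 kB, each good
     * for ~1000 program/erase cycles. */
    const double card_write_budget = 40000.0 * 1000.0;              /* 40 million */

    printf("writes needed : %.1f million\n", block_writes_needed / 1e6);
    printf("write budget  : %.1f million\n", card_write_budget / 1e6);
    printf("margin        : %.1fx (assuming ideal wear leveling)\n",
           card_write_budget / block_writes_needed);
    return 0;
}
```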

However, this assumes some kind of wear leveling, since a few sectors will be overwritten far more often than the rest; the FAT data structures come to mind.

My question is: how would I go about replacing this SD card with a flash chip like this? I'm particularly worried because I see no mention of wear leveling in the datasheets of such flash chips. Should I migrate to another filesystem? Write raw bytes? Buy a chip with built-in wear leveling?

Best Answer

This is a NOR flash chip. NOR flash has lower endurance, but is more reliable as long as you stay within its limits. It is well suited to data that doesn't change much but for which you can't really afford a bad block (firmware code, configuration data, ...). It isn't well suited to logging data, which implies continuous writes. Such NOR flash chips typically don't include any wear-leveling layer, indeed: they are mostly used in applications with few writes, so manufacturers assume it isn't generally needed, and it would take quite a bit of silicon logic.

I doubt you can find such a NOR flash chip with built-in wear-leveling anywhere.

BUT... Don't lose hope...

The thing is, what wears the chip out is erasing data, not really writing it. An erased block reads as all 0xFF bytes (all bits set to '1'). Writing just sets the bits you want to '0'. You can therefore write the same sector multiple times, flipping some of the remaining '1' bits to '0' each time. Erasing the sector reverts all the bits to '1'.
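To make that rule concrete, here is a tiny check (purely illustrative, the function name is made up) that tells you whether a byte can be overwritten in place without an erase: the new value must not require any bit that is already '0' to go back to '1'.

```c
#include <stdbool.h>
#include <stdint.h>

/* NOR programming can only clear bits (1 -> 0). A new value can be
 * written over an existing one without erasing the sector only if it
 * never needs a bit that is already 0 to become 1 again. */
static bool can_program_without_erase(uint8_t current, uint8_t wanted)
{
    /* Every 0 in 'current' must also be 0 in 'wanted': ANDing what is
     * there with what we want must give exactly what we want. */
    return (current & wanted) == wanted;
}

/* Examples:
 *   can_program_without_erase(0xFF, 0x5A) -> true  (erased byte, any value fits)
 *   can_program_without_erase(0x5A, 0x1A) -> true  (only clears one more bit)
 *   can_program_without_erase(0x1A, 0x5A) -> false (would need a 0 back to 1)
 */
```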

So, for data logging, it is quite possible to imagine a custom scheme where you write your 100 bytes each minute in sequence, each time using a "fresh" 100-byte slot (writing the same sector multiple times), and erasing sectors only when you run out of fresh slots. You also need to keep track of the location of the latest written block, and you can do this by flipping some bits from '1' to '0' in a particular "index" sector (which you'll likewise write multiple times without erasing in between).
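A minimal sketch of such a scheme, assuming a hypothetical NOR driver (flash_read / flash_write / flash_erase_sector) and made-up sizes. Instead of a separate index sector, this variant finds its position after a reset by scanning for the first still-erased marker byte, which carries the same information without any extra erases:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical driver and geometry - all names and sizes are
 * placeholders, adjust to the real chip. flash_write() may only
 * clear bits (1 -> 0); flash_erase_sector() returns a sector to 0xFF. */
#define SECTOR_SIZE         4096u
#define LOG_SECTORS         64u
#define RECORD_SIZE         128u                 /* 1 marker byte + padded payload */
#define RECORDS_PER_SECTOR  (SECTOR_SIZE / RECORD_SIZE)

extern void flash_read (uint32_t addr, void *buf, uint32_t len);
extern void flash_write(uint32_t addr, const void *buf, uint32_t len);
extern void flash_erase_sector(uint32_t sector_index);

static uint32_t cur_sector, cur_slot;            /* next free position */

static uint32_t slot_addr(uint32_t s, uint32_t r)
{
    return s * SECTOR_SIZE + r * RECORD_SIZE;
}

/* A slot is still "fresh" while its marker byte reads 0xFF (erased). */
static bool slot_is_free(uint32_t s, uint32_t r)
{
    uint8_t marker;
    flash_read(slot_addr(s, r), &marker, 1);
    return marker == 0xFF;
}

/* After reset, find the first fresh slot: the erased markers themselves
 * record how far the log has progressed, so no index needs rewriting. */
void log_init(void)
{
    for (cur_sector = 0; cur_sector < LOG_SECTORS; cur_sector++)
        for (cur_slot = 0; cur_slot < RECORDS_PER_SECTOR; cur_slot++)
            if (slot_is_free(cur_sector, cur_slot))
                return;
    cur_sector = 0;                              /* log completely full:     */
    flash_erase_sector(0);                       /* recycle the first sector */
    cur_slot = 0;
}

/* Append one ~100-byte record. Only one erase happens per
 * RECORDS_PER_SECTOR appends, and it rotates over all LOG_SECTORS
 * sectors, so each individual sector is erased very rarely. */
void log_append(const uint8_t payload[RECORD_SIZE - 1])
{
    uint8_t rec[RECORD_SIZE];

    if (cur_slot >= RECORDS_PER_SECTOR) {        /* sector full: move on and */
        cur_sector = (cur_sector + 1) % LOG_SECTORS;   /* recycle the oldest */
        flash_erase_sector(cur_sector);
        cur_slot = 0;
    }
    rec[0] = 0x00;                               /* marker: slot now used */
    memcpy(rec + 1, payload, RECORD_SIZE - 1);
    flash_write(slot_addr(cur_sector, cur_slot), rec, RECORD_SIZE);
    cur_slot++;
}
```

With the made-up geometry above (64 sectors of 4 kB, 128-byte slots, one record per minute), a full pass through the log takes about 2000 minutes, so each sector sees only a few thousand erase cycles over 10 years, comfortably within typical NOR endurance ratings.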

This keeps the number of erase cycles to a strict minimum (possibly at the expense of some room temporarily occupied by outdated data), and that is what preserves the endurance.

This can be quite easy to implement depending on the write patterns you have to support. For something really dynamic (files of arbitrary size that can be written randomly), it can quickly become complicated, but quite often the needs are much simpler than that.