DRAM timing with row and column decoders


Consider a 64K×1 DRAM, which means it has 256 rows and 256 columns. In other words, two 8-to-256 decoders are needed to select the right row and column.
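For illustration, here is a minimal sketch of that address split in C, assuming the upper 8 bits of the 16-bit address select the row and the lower 8 bits select the column (which half drives which decoder is a design choice, not taken from any specific part):

```c
#include <stdint.h>

/* 64Kx1 DRAM addressing sketch: a 16-bit address splits into two 8-bit
   halves, one per 8-to-256 decoder. The row/column assignment below is
   an assumption for illustration. */
uint8_t row_of(uint16_t addr)    { return (uint8_t)(addr >> 8);   } /* upper 8 bits -> row decoder    */
uint8_t column_of(uint16_t addr) { return (uint8_t)(addr & 0xFF); } /* lower 8 bits -> column decoder */
```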

Since each memory location is 1 bit wide and we usually read 8 bits, does that mean that with a single row number, the column number must be changed 8 times in order to read 8 bits?

I have seen timing diagrams for row and column strobes. Are these valid for a single bit? If so, that implies a high timing overhead for reading multiple bits, doesn't it?

Best Answer

Not usually. To read 8 bits, normal practice is to read one bit from each of 8 separate DRAMs.
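As a minimal sketch of that arrangement (the type and function names here are invented for illustration), all eight chips receive the same row and column address, and chip i supplies data bit i:

```c
#include <stdint.h>

/* Simulation sketch of a x8 array built from eight 64Kx1 DRAMs: every
   chip sees the same row/column address and contributes one data bit. */
typedef struct {
    uint8_t cell[256][256]; /* one bit per location, stored one per byte */
} dram_64k_x1;

uint8_t read_byte(const dram_64k_x1 chip[8], uint8_t row, uint8_t col)
{
    uint8_t byte = 0;
    for (int i = 0; i < 8; i++)
        byte |= (uint8_t)((chip[i].cell[row][col] & 1u) << i); /* chip i -> bit i */
    return byte;
}
```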

However, if you were forced by cost or power considerations to use a single device, DRAMs of that era provided both burst mode and page mode, which allow you to provide the column number of the first bit you need, then automatically access adjacent bits in succeeding cycles - in page mode, up to all 256 bits in the currently open row.
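A rough sketch of a page-mode read sequence, assuming placeholder helpers for driving the pins (none of these names come from a real controller API): the row is strobed in once with /RAS, then successive columns are strobed with /CAS while the row stays open.

```c
#include <stdint.h>

/* Placeholder pin-driving helpers, invented for this sketch; a real
   controller would toggle the DRAM's multiplexed address bus and the
   /RAS, /CAS strobe pins with the datasheet's setup/hold timing. */
extern void    drive_address(uint8_t a);
extern void    assert_ras(void);
extern void    release_ras(void);
extern void    assert_cas(void);
extern void    release_cas(void);
extern uint8_t sample_data_out(void);

/* Page-mode read: one /RAS for the row, then up to 256 /CAS cycles. */
void page_mode_read(uint8_t row, const uint8_t *cols, uint8_t *bits, int n)
{
    drive_address(row);
    assert_ras();                     /* open the row; it stays open throughout */
    for (int i = 0; i < n; i++) {
        drive_address(cols[i]);
        assert_cas();
        bits[i] = sample_data_out();  /* one data bit per /CAS cycle */
        release_cas();
    }
    release_ras();                    /* close the row (precharge) */
}
```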

(64K DRAMs are well over 25 years old! Where exactly is this question being dug up from - is this an archaeology question?)

The details of page and burst modes differ, and when L1/L2 caches became universal, burst modes evolved to address entire cache lines, wrapping the column address round by modular arithmetic rather than strictly ascending.
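For example, an SDRAM-style wrapped burst of length 8 starting at column 5 produces columns 5, 6, 7, 0, 1, 2, 3, 4. A sketch of that ordering (burst_column is an invented helper name):

```c
#include <stdint.h>

/* Wrapped burst ordering sketch: the i-th column of a burst of length
   `len` (a power of two) starting at `start` stays inside the aligned
   len-sized block, wrapping by modular arithmetic. */
uint16_t burst_column(uint16_t start, uint16_t len, uint16_t i)
{
    return (start & (uint16_t)~(len - 1))  /* aligned base of the block */
         | ((start + i) & (len - 1));      /* offset wraps modulo len   */
}
```

This wrap-around is what lets a cache receive the critical word first while still filling the whole line.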

Page mode also offered convenient shortcuts to video card designers of that timeframe (25 to maybe 15 years ago).

However, in newer DRAM designs (from the first DDR generation onward), page mode has quietly been dropped, leaving only burst modes, so if you need larger groups of adjacent locations you may have to address every eighth column individually, at precisely the right time. This makes life more difficult if you're using DRAM without a cache/CPU combination, for example in FPGA applications.
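To illustrate, a sketch of fetching a long span of adjacent columns as back-to-back bursts, with an invented issue_read_burst placeholder standing in for whatever command interface the controller exposes:

```c
#include <stdint.h>

/* Without page mode, a long run of adjacent columns becomes a series of
   fixed-length bursts; a new column address (every 8th column here) must
   be issued for each one, timed so the data bus stays gap-free. */
extern void issue_read_burst(uint8_t column); /* placeholder: one READ, burst of 8 */

void read_row_span(uint8_t first_col, int n_cols)
{
    for (int c = first_col; c < first_col + n_cols; c += 8)
        issue_read_burst((uint8_t)c);
}
```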