DRAM, as you said, basically consists of a storage capacitor and a transistor to access the voltage stored on that capacitor. Ideally, the charge stored on that capacitor would never decrease, but there are leakage components that allow the charge to bleed off. If enough charge bleeds off the capacitor, then the data cannot be recovered. In normal operation, this loss of data is avoided by periodically refreshing the charge in the capacitor. This is why it is called Dynamic RAM.
Decreasing the temperature does a few things:
- It increases the threshold voltages of MOSFETs and the forward voltage drop of diodes.
- It decreases the leakage currents of MOSFETs and diodes.
- It improves the on-state performance of MOSFETs.
Considering that the first two points directly reduce the leakage current seen by the storage capacitors, it should be less of a surprise that the charge stored in a DRAM bit can last long enough for a careful reboot process. Once power is reapplied and refresh resumes, the DRAM will maintain the stored values.
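As a rough illustration of why cooling helps, here is a sketch assuming silicon junction leakage roughly doubles for every 10 °C rise (a common rule of thumb; the exact factor varies by process). The numbers and the function name are illustrative, not from any datasheet.

```python
# Rough sketch: how DRAM retention time scales with die temperature,
# assuming leakage current doubles every 10 degC (rule of thumb only).

def retention_scale(t_ref_c: float, t_c: float, doubling_deg: float = 10.0) -> float:
    """Multiplier on retention time at temperature t_c, relative to t_ref_c.

    Retention time ~ stored charge / leakage current, so if leakage
    doubles every `doubling_deg` degrees, retention halves at the same rate.
    """
    return 2.0 ** ((t_ref_c - t_c) / doubling_deg)

# Example: cooling the die from 50 degC to -50 degC
scale = retention_scale(50.0, -50.0)
print(f"retention improves ~{scale:.0f}x")  # 2**10 = 1024x
```

Under this assumption, a 100 °C drop stretches retention by about three orders of magnitude, which is why a cold chip can survive a careful reboot.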
These basic premises can be applied to many different circuits, such as microcontrollers or even discrete circuits, as long as there isn't an initialization on start-up. Many microcontrollers, for example, will reset several registers on start-up, whether the previous contents were preserved or not. Large memory arrays are not likely to be initialized, but control registers are much more likely to have a reset on start-up function.
If you heat the die enough, you can create the opposite effect: the charge decays so rapidly that the data is erased before the refresh cycle can maintain it. However, this should not happen over the specified temperature range. Heating the memory enough for the data to decay faster than the refresh cycle could also slow the circuit down to the point where it couldn't meet the specified memory timings, which would appear as a different error.
This is not related to bit-rot. Bit-rot is either the physical degradation of storage media (CD, magnetic tapes, punch cards) or an event causing the memory to become corrupted, such as an ion impact.
It's not clear what you are really asking, but it is common to store binary values that are wider than the addressable unit of memory. For example, if you want to store 16 bit words in a byte-addressed memory, then you use two bytes. That also means the software has to know which byte is stored first.
In your case you seem to have native 4 bit words. To store a 16 bit value, use 4 words. Again, your software will have to know what convention you are using, like low word first or high word first.
Since your memory only has 4 address lines, its size is limited to 16 native words. Your native word size appears to be 4 bits, so your memory can hold only 64 bits. That means it will be completely used up by two 32 bit values or four 16 bit values, for example.
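The convention above can be sketched in a few lines. This is an illustrative example, not taken from the question: it assumes a "low word first" ordering for storing a 16-bit value across four 4-bit native words.

```python
# Sketch: storing 16-bit values in a memory of 4-bit native words,
# low nibble first (an assumed convention the software must agree on).

def to_nibbles(value: int) -> list[int]:
    """Split a 16-bit value into four 4-bit words, least significant first."""
    assert 0 <= value <= 0xFFFF
    return [(value >> (4 * i)) & 0xF for i in range(4)]

def from_nibbles(nibbles: list[int]) -> int:
    """Reassemble four 4-bit words (least significant first) into a 16-bit value."""
    return sum(n << (4 * i) for i, n in enumerate(nibbles))

memory = [0] * 16                   # 16 words of 4 bits: 64 bits total
memory[0:4] = to_nibbles(0xBEEF)    # one 16-bit value uses 4 of the 16 words
print([hex(n) for n in memory[0:4]])  # ['0xf', '0xe', '0xe', '0xb']
assert from_nibbles(memory[0:4]) == 0xBEEF
```

With 16 words available, four such 16-bit values fill the memory completely, matching the 64-bit capacity worked out above.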
Why refresh?
DRAM uses capacitors as storage cells. These capacitors, being really small and made from silicon, leak off their charge over time. That’s the D in DRAM: the cells are dynamic; their charge state changes over time.
To preserve the logic state of those leaky DRAM cells, their state must be read before their charge has bled off, then written back to bring their state to full, freshly-written charge. That’s refresh, in a nutshell.
To make this manageable, DRAMs implement a special read-then-write cycle, called refresh, that hits multiple cells at once and writes them back: typically one or more rows of cells, about 1/256th of the DRAM at a time.
The host refresh operation is a race against time: every DRAM row has to be refreshed before its contents leak away. This usually works out to a window of between 8 and 16 ms to cover all the rows.
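The per-row scheduling that falls out of this budget is simple arithmetic. The row count and window below are illustrative figures consistent with the 1/256th and 8-16 ms numbers above; real parts specify these in their datasheets.

```python
# Back-of-the-envelope refresh scheduling sketch (illustrative numbers).

rows = 256            # e.g. a DRAM refreshed one row (1/256th) at a time
window_ms = 16.0      # every row must be refreshed within this window

interval_us = window_ms * 1000.0 / rows
print(f"one row refresh roughly every {interval_us:.1f} us")
```

In other words, the memory controller has to slip in a refresh cycle every few tens of microseconds, stealing that time from normal reads and writes.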
In contrast, Static RAM, or SRAM, uses a latch as a storage element. The latch keeps its state as long as the power is kept on or it’s written with a new value.
What does this mean with power and density?
SRAM can, in theory, have almost no standby power, as it uses a CMOS latch to store data. In practice, fast SRAM will have fairly high standby leakage current and even higher current during activity due to the use of low-threshold transistors to increase speed.
SRAM latches take between 4 and 8 transistors per bit, and all of them can leak.
More here: https://en.wikichip.org/wiki/static_random-access_memory
Meanwhile, DRAM has standby power to deal with refreshes. There’s considerable effort by chipmakers to offer low-power self-refresh modes that both stretch out the time between refresh operations and don’t require host intervention once the mode is entered. This self-refresh mode is used in the computer ‘sleep’ state, allowing the CPU to power down while enabling near-instant wake-up.
Density-wise, DRAM uses basically one transistor per cell, connected to a capacitor that is dug vertically into the silicon as a well. This makes DRAM area per bit very small compared with the 6T or 8T SRAM latch cell. With fewer transistors, DRAM standby leakage per bit is also reduced.
More here: http://www.cse.scu.edu/~tschwarz/coen180/LN/DRAM.html#CellDesign
So overall, owing to its density and lower transistor count per bit, DRAM offers substantially better power than fast SRAM, but substantially worse than slow, low-leakage SRAM, because DRAM requires refreshing.