DRAM, as you said, basically consists of a storage capacitor and a transistor to access the voltage stored on that capacitor. Ideally, the charge stored on that capacitor would never decrease, but there are leakage components that allow the charge to bleed off. If enough charge bleeds off the capacitor, then the data cannot be recovered. In normal operation, this loss of data is avoided by periodically refreshing the charge in the capacitor. This is why it is called Dynamic RAM.
Decreasing the temperature does a few things:
- It increases the threshold voltages of MOSFETs and the forward voltage drop of diodes.
- It decreases the leakage currents of MOSFETs and diodes.
- It improves the on-state performance of the MOSFETs.
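The first two points can be illustrated with a toy retention-time model. Leakage current through a junction roughly follows an Arrhenius law, falling exponentially as temperature drops, so the time for a cell's charge to decay grows enormously when the die is cooled. Every parameter below (reference leakage, activation energy, usable voltage margin) is an illustrative assumption, not a datasheet value:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K


def leakage_current(temp_k, i_ref=1e-15, e_a=0.6, t_ref=300.0):
    """Arrhenius-style leakage model: current falls exponentially as
    temperature drops. i_ref (leakage at t_ref) and e_a (activation
    energy in eV) are illustrative assumptions, not measured values."""
    return i_ref * math.exp(-(e_a / K_B) * (1.0 / temp_k - 1.0 / t_ref))


def retention_time(temp_k, cell_cap=7.4e-15, delta_v=0.3):
    """Seconds until the cell loses delta_v volts of stored charge,
    assuming a constant leakage current (a simplification)."""
    return cell_cap * delta_v / leakage_current(temp_k)


for t in (350, 300, 250, 200):
    print(f"{t} K: ~{retention_time(t):.3g} s")
```

With these made-up numbers the retention time goes from seconds near room temperature to days at 200 K, which is the qualitative effect a cold-boot attack relies on.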
Considering that the first two points directly reduce the leakage current that drains the storage capacitors, it should be less of a surprise that the charge stored in a DRAM bit can survive long enough for a careful reboot process. Once power is reapplied, the DRAM's normal refresh operation maintains the stored values from then on.
These basic premises can be applied to many different circuits, such as microcontrollers or even discrete circuits, as long as there isn't an initialization on start-up. Many microcontrollers, for example, will reset several registers on start-up regardless of whether the previous contents were preserved. Large memory arrays are not likely to be initialized, but control registers are much more likely to be reset on start-up.
If you heat the die enough, you get the opposite effect: the charge decays so rapidly that the data is lost before the refresh cycle can restore it. However, this should not happen within the specified temperature range. Heating the memory enough for the data to decay faster than the refresh cycle could also slow the circuit down to the point where it could no longer meet the specified memory timings, which would appear as a different error.
This is not related to bit-rot. Bit-rot is either the physical degradation of storage media (CD, magnetic tapes, punch cards) or an event causing the memory to become corrupted, such as an ion impact.
It's not clear what you are really asking, but it is common to store binary values that are wider than the addressable unit of memory. For example, if you want to store 16 bit words in a byte-addressed memory, then you use two bytes. That also means the software has to know which byte is stored first.
In your case you seem to have native 4 bit words. To store a 16 bit value, use 4 words. Again, your software will have to know what convention you are using, like low word first or high word first.
Since your memory only has 4 address lines, its size is limited to 16 native words. Your native word size appears to be 4 bits, so your memory can hold only 64 bits. That means it will be completely used up by two 32 bit values or four 16 bit values, for example.
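The arithmetic above, and the byte/word-ordering convention, can be sketched concretely. This is a hypothetical helper for a 4-bit-word machine, not any particular device's API; the "low word first" default is just one of the two conventions you could pick:

```python
def to_nibbles(value, low_first=True):
    """Split a 16-bit value into four 4-bit words. Whether the low or
    high nibble is stored first is a convention your software must
    apply consistently when reading back."""
    assert 0 <= value <= 0xFFFF
    nibbles = [(value >> shift) & 0xF for shift in (0, 4, 8, 12)]
    return nibbles if low_first else nibbles[::-1]


def from_nibbles(nibbles, low_first=True):
    """Reassemble four 4-bit words into one 16-bit value, using the
    same ordering convention as to_nibbles."""
    order = nibbles if low_first else nibbles[::-1]
    return sum(n << (4 * i) for i, n in enumerate(order))


print(to_nibbles(0xBEEF))  # [15, 14, 14, 11], i.e. 0xF, 0xE, 0xE, 0xB
print(hex(from_nibbles([0xF, 0xE, 0xE, 0xB])))  # 0xbeef

# Capacity check: 4 address lines -> 16 words of 4 bits = 64 bits total,
# i.e. four 16-bit values or two 32-bit values.
print((2 ** 4) * 4)  # 64
```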
Best Answer
Yes, they really have that many capacitors in that small of an area.
There are two dominant technologies to do this: stacked capacitor DRAMs and trench capacitor DRAMs.
Stacked capacitors basically use a number of layers of metal and insulator to build a capacitor of reasonable capacity in a small surface area.
Trench capacitor DRAMs basically etch a deep, V-shaped "trench" into the silicon, then deposit a layer of metal, a layer of insulator, and another layer of metal.
Either way, you end up with a relatively large capacitance for the surface area. The capacitance is still quite small by most normal standards though. For example, as of 2017, a Samsung DRAM had a cell capacitance of around 7.4 fF.
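To get a feel for how small 7.4 fF is, you can count the electrons involved. Using Q = C·V with an assumed 1.0 V storage voltage (the actual cell voltage varies by design, so this is for illustration only):

```python
E_CHARGE = 1.602e-19  # elementary charge in coulombs

cell_cap = 7.4e-15    # 7.4 fF, the figure quoted for a 2017 Samsung DRAM
v_stored = 1.0        # assumed storage voltage, illustrative only

charge = cell_cap * v_stored            # Q = C * V
electrons = charge / E_CHARGE
print(f"~{electrons:.0f} electrons per stored bit")  # roughly 46,000
```

A stored bit is only a few tens of thousands of electrons, which is why careful sensing is needed to read it reliably.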
To get meaningful results from such a small capacitance, most DRAMs actually have some extra capacitors in addition to those used for the storage.
To read a cell, you charge one of these spare cells (one that's physically close to the cell you want to read) with approximately half the charge you'd use to store a `1` in a normal memory cell. One easy way to do that is to use two capacitor cells together, so feeding the same voltage and duration of charge pulse into them results in half the charge in each capacitor. Then you read back the values from the spare cell and the memory cell and feed them both into a differential amplifier (the "sense amp"). This cancels most of the common-mode noise on the bit lines, so the signal coming out of the sense amp is a fairly clean low or high value, with substantially better noise immunity (and hence better reliability) than you'd get by reading the voltage from the capacitor by itself.
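The noise-cancelling property of the differential read can be shown with a toy model. Here the reference line sits at half the full-cell voltage, and any noise that couples equally into both lines (common-mode noise) subtracts out in the comparison. The voltage levels and the `sense` function are made up for illustration; a real sense amp is an analog latch, not a Python function:

```python
V_FULL = 1.0  # assumed voltage of a freshly written '1'; a '0' is 0 V


def sense(cell_v, noise=0.0):
    """Toy differential read: the bit line (cell voltage) and the
    half-charged reference line pick up the same common-mode noise,
    which cancels when the sense amp takes their difference."""
    reference = V_FULL / 2
    bit_line = cell_v + noise
    ref_line = reference + noise
    return 1 if (bit_line - ref_line) > 0 else 0


# Even with noise comparable to the signal itself, the decision holds,
# because the noise appears on both inputs:
print(sense(0.9, noise=0.4))   # 1
print(sense(0.1, noise=-0.4))  # 0
```

A single-ended read of the same noisy bit line (comparing 0.5 V against a fixed threshold) would get both of these cases wrong, which is the point of the differential scheme.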
In addition, a typical DRAM will have some extra banks of memory. When the chip is being tested at the factory, they may find that one of the normal banks of memory has a defect. If so, the chip will typically include some fuses (or anti-fuses) that can be blown to substitute a spare bank for the defective one, so a chip can still usually meet spec, despite a defect or two.
Thus, a DRAM chip will typically have even more capacitors than you'd get from computing its size based on what it's rated to hold. The increase is fairly small as a percentage, but with something like a 32 GB memory, even a small percentage works out to a fairly large absolute number.
As a final note: a DRAM chip has to have a fair amount of circuitry (decoders, sense amps, etc.) in addition to the DRAM cells themselves. As a really simple rule of thumb, figure that the actual cells occupy about half the chip area, and the associated circuitry the other half.