Electronic – What limits the lower bound of DRAM CAS latency


When a DRAM module receives a read/write command (while a row is active), it needs to:

  1. decode the command along with the bank and column addresses.

  2. multiplex the bank address and route the command to the right bank.

  3. (when reading) move the data from the column's latches to the send buffer, ready to shift out after the CL expires, barrel shifting along the way according to the column address.

    (when writing) associate the relevant places in the receive buffer with the correct bank and barrel shift.
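The barrel shift in step 3 can be pictured as a rotation of the prefetched burst so that the requested column comes out first. A toy model in Python (real DDR burst ordering wraps within sub-bursts in sequential or interleaved mode per the JEDEC spec, so this plain rotation is only a sketch):

```python
# Toy model of the column barrel shift: rotate the prefetched burst so
# the requested starting column is transmitted first. Real DDR burst
# ordering is more involved (wrapped sequential/interleaved modes);
# this simple rotation only illustrates the idea.
def rotate_burst(burst, start_col):
    n = len(burst)
    return [burst[(start_col + i) % n] for i in range(n)]

print(rotate_burst(list(range(8)), 3))  # [3, 4, 5, 6, 7, 0, 1, 2]
```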

All of this together seems to take a fairly consistent ~10 ns on modern DDRx DRAM modules, even though the I/O clock runs roughly ten times faster.
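The ~10 ns figure falls out of the rated timings: CAS latency in nanoseconds is the CL cycle count divided by the I/O clock frequency, and the I/O clock in MHz is half the transfer rate in MT/s. A quick check with some typical retail-module ratings (illustrative parts, not an exhaustive survey):

```python
# CAS latency in nanoseconds from rated timings:
# latency_ns = CL / f_clock, with f_clock (MHz) = transfer rate (MT/s) / 2,
# so latency_ns = CL * 2000 / MT/s.
def cas_latency_ns(cl_cycles, transfer_rate_mts):
    return cl_cycles * 2000.0 / transfer_rate_mts

print(cas_latency_ns(11, 1600))  # DDR3-1600 CL11 -> 13.75 ns
print(cas_latency_ns(16, 3200))  # DDR4-3200 CL16 -> 10.0 ns
print(cas_latency_ns(30, 6000))  # DDR5-6000 CL30 -> 10.0 ns
```

Across several generations the cycle count grows roughly in step with the clock, so the latency in nanoseconds barely moves.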

What is the biggest bottleneck in this sequence and could it be improved significantly or is there something else I'm missing here?

Best Answer

What is the biggest bottleneck in this sequence

My guess would be parasitics. The copper read line and the read amplifier would look like an R/C low-pass filter when reading from a DRAM cell.

And since there are limits on how wide the copper lines and how thick the insulation between them can be made, the R/C time constant has stayed roughly the same even as the silicon structures got smaller in newer processes.

Trying to improve this time constant would probably reduce the bit density too much, and density is also a key parameter in DRAM design.
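A back-of-the-envelope way to see why the time constant doesn't shrink with the process: if every wire dimension (length, width, thickness, spacing) is scaled down by the same factor, the resistance goes up by that factor while the capacitance goes down by it, so τ = RC stays put. A sketch using a parallel-plate approximation (all values are illustrative units, not real process numbers):

```python
# Illustrative RC wire-delay scaling, parallel-plate approximation.
# rho: resistivity, eps: permittivity; L/W/T: wire length/width/thickness;
# spacing: distance to the neighbouring line. Arbitrary units.
def wire_tau(rho, eps, L, W, T, spacing):
    R = rho * L / (W * T)        # resistance of the line
    C = eps * L * T / spacing    # coupling capacitance to the neighbour
    return R * C

full = wire_tau(1.0, 1.0, L=1.0, W=1.0, T=1.0, spacing=1.0)
half = wire_tau(1.0, 1.0, L=0.5, W=0.5, T=0.5, spacing=0.5)  # shrink everything 2x
print(full, half)  # the RC product is unchanged by uniform scaling
```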