SDRAM: Why CAS latency is configurable

Tags: sdram, timing

I've seen a couple of very similar questions, but their answers don't address mine:

DDR2 CAS Latency – is it fixed to clock-cycles or time?

What limits the lower bound of DRAM CAS latency

In my current understanding, once a row in a DRAM bank has been activated, reading a column is just a matter of latching the values held in the sense amplifiers out to the data pins, and that process is synchronized by the clock. So the time to read a column from an active row should be specified in clock cycles.

Yet it appears CAS latency is time-based: for example, this datasheet allows CAS=3 at 166 MHz, CAS=2 at 100 MHz and CAS=1 at 50 MHz, and the CAS latency is configurable, whereas in my understanding it should be a constant number of clock cycles regardless of the clock frequency.
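To see how a fixed internal delay turns into a frequency-dependent cycle count, here is a minimal sketch; the ~18 ns internal access time is a hypothetical value chosen only for illustration, not a figure from the datasheet:

```python
import math

T_CAC_NS = 18.0  # hypothetical internal column access time, not a datasheet value

for freq_mhz in (166, 100, 50):
    t_clk_ns = 1000.0 / freq_mhz             # clock period in nanoseconds
    cycles = math.ceil(T_CAC_NS / t_clk_ns)  # whole clock periods needed to cover the delay
    print(f"{freq_mhz} MHz: period = {t_clk_ns:.1f} ns -> CAS latency = {cycles} cycles")
```

The same ~18 ns delay spans three clock periods at 166 MHz but fits inside a single period at 50 MHz, which is exactly the pattern of the datasheet's latency settings.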

What is wrong with my understanding? What analog process do column reads in DRAM depend on?

Edit: researching this a bit more, I've found there is t_CAC, the time between the CAS pin being driven low and valid data appearing on the data pins. The CAS latency t_CAS, on the other hand, is always measured in a whole number of clock cycles, and (expressed as time) it should satisfy t_CAS >= t_CAC.
This makes me think the read process is actually asynchronous, with purely combinational logic between the sense amplifiers, the CAS pin and the data pins. Is this correct?
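If that model is right, the condition the controller has to satisfy is simply that the clock edge on which it samples the data pins falls at or after t_CAC. A small sketch of that check, using the same hypothetical 18 ns figure and a ~6 ns (166 MHz) clock period:

```python
def cl_is_safe(cl: int, t_clk_ns: float, t_cac_ns: float) -> bool:
    """True if sampling on the cl-th rising edge after CAS falls sees valid data."""
    sample_time_ns = cl * t_clk_ns      # when the synchronous interface latches data out
    return sample_time_ns >= t_cac_ns   # data pins are valid t_CAC after CAS is driven low

print(cl_is_safe(2, 6.0, 18.0))  # False: 12 ns < 18 ns, data not yet valid
print(cl_is_safe(3, 6.0, 18.0))  # True: 18 ns >= 18 ns
```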

Best Answer

Inside the SDRAM chip, the actual CAS latency requirement is a combinatorial time delay, independent of the external interface's clock period. It may help to think of it as an old-fashioned asynchronous DRAM chip "wrapped" in a synchronous interface.

Since the bus master (CPU) can choose the interface clock speed, it makes sense to also allow it to configure the number of clocks to use for CAS latency, in order to get the best performance without dropping below the amount of time the chip requires internally.
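A sketch of that decision from the bus master's side: pick the smallest CAS latency whose sample edge still lands after the chip's internal delay, and reject clock/latency combinations the part cannot meet. The 18 ns figure and the supported {1, 2, 3} set are assumptions for illustration, not values from any particular datasheet:

```python
import math

SUPPORTED_CL = (1, 2, 3)   # assumed: latencies this hypothetical part accepts
T_CAC_NS = 18.0            # assumed internal access time

def pick_cas_latency(freq_mhz: float) -> int:
    """Smallest supported CAS latency that still covers the internal access time."""
    t_clk_ns = 1000.0 / freq_mhz
    cl = math.ceil(T_CAC_NS / t_clk_ns)   # minimum cycles needed at this clock
    if cl not in SUPPORTED_CL:
        raise ValueError(f"{freq_mhz} MHz needs CL={cl}, which this part does not support")
    return cl

for f in (50, 100, 166):
    print(f"{f} MHz -> CL={pick_cas_latency(f)}")
```

Running at 50 MHz with CL=3 would also be safe; it just wastes two clock periods on every access, which is the performance cost the configurable setting lets the bus master avoid.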