The CL minimum is time based, but it is counted in clock cycles, so at a lower clock frequency you can presumably reduce CL. If you do, you must program the actual CL value into the DRAM's mode register on initialisation; if the data sheet gives a mode register bit pattern representing CL=5, then you are probably OK.
Also check that your reduced clock frequency is still within the permitted range for the part's internal DLL (a minimum frequency will be specified), but I think 270 MHz will be OK.
These details are from memory, but I did reliably operate DDR2 from a Virtex-5 FPGA at just under 200 MHz, so there is some leeway.
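As a sanity check, the cycle count can be recomputed from the part's time-based CAS latency. A minimal sketch, using hypothetical numbers (a 12.5 ns CAS latency, i.e. CL=5 at 400 MHz); check your own data sheet for the real values:

```python
import math

def min_cas_cycles(tCL_ns: float, clk_mhz: float) -> int:
    """Smallest integer CL that still satisfies the time-based CAS latency."""
    t_clk_ns = 1000.0 / clk_mhz  # clock period in ns
    return math.ceil(tCL_ns / t_clk_ns)

# Hypothetical part: tCL = 12.5 ns, rated for CL=5 at 400 MHz.
print(min_cas_cycles(12.5, 400))  # CL at the rated clock
print(min_cas_cycles(12.5, 270))  # CL at the reduced clock
```

Whatever value falls out, it still has to be one the mode register can actually encode for that part.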
The answers here are good regarding how, in normal practice, the bitlines will be precharged to VDD/2. However, that doesn't really answer the question, because:
it does not apply all the time (it depends on the cache requirements and the process technology; I have seen plenty of caches that precharge to VDD, because at low-voltage operation VDD/2 can be too risky), and
the 'canonical case' everyone learns first doesn't precharge to VDD/2, and that is the situation he is asking about. There is still a good reason the bitlines go to VDD, though.
The main reason the bitlines are charged HIGH (in the circuit he is showing) and allowed to discharge is that the pass transistors are NMOS. This means they pass a very solid '0' but a degraded '1'.
So rather than starting the bitlines low and letting them pull up through the NMOS (slower and weaker; they can only reach VSUPPLY-VTH), the design starts the bitlines high and lets them pull down through the NMOS, which can pull down much more strongly, to a solid '0'.
Another very good reason is the set of constraints on transistor sizing that must be met for proper writability and readability:
Read operation: M1 must be stronger than M5, so that the voltage divider formed between M5/M1 does not flip the bitnode.
Write operation: M2 must be weaker than M5, so that M5 can overcome the feedback loop when writing a '1'.
So, M1 > M5 > M2 (and M3 > M6 > M4). The PMOS devices are the weakest transistors in the whole cell, so why use them to pull up?
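The read-stability constraint above can be illustrated with a crude resistive-divider model. Treating the pull-down (M1) and access (M5) transistors as linear conductances proportional to their W/L is a big simplification of the real device behaviour, and the sizes below are purely hypothetical, but it shows why M1 must be stronger than M5:

```python
def read_disturb_voltage(vdd: float, g_pulldown: float, g_access: float) -> float:
    """Divider estimate of how far the '0' storage node is bumped up during a
    read, when the precharged-high bitline fights the cell's pull-down."""
    return vdd * g_access / (g_access + g_pulldown)

vdd = 1.0
# M1 stronger than M5: the bump stays small and the bit node holds its '0'.
print(read_disturb_voltage(vdd, g_pulldown=2.0, g_access=1.0))
# M1 weaker than M5: the node is dragged past mid-rail and the cell can flip.
print(read_disturb_voltage(vdd, g_pulldown=0.5, g_access=1.0))
```

If the bump approaches the inverter's switching threshold, the feedback loop flips the cell, which is exactly the failure the M1 > M5 rule prevents.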
On top of that, NMOS have traditionally been faster than PMOS. This is less true today at smaller process nodes (22 nm, 14 nm, 10 nm, etc.), but it is still the usual assumption.
My guess would be parasitics. The copper read line and the read amplifier would look like an RC low-pass filter when reading from a DRAM cell.
And since there are limits on how wide the copper line and how thick the isolation can be, the RC time constant has stayed roughly the same even as the silicon structures shrank in newer processes.
Trying to improve this time constant would probably lower the data density too much, which is also an important parameter in DRAM design.
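To put rough numbers on the RC argument: a minimal sketch with hypothetical parasitics (the 10 kΩ and 100 fF below are illustrative, not taken from any real process):

```python
def settle_time(r_ohm: float, c_farad: float, n_tau: int = 5) -> float:
    """Time for a first-order RC low-pass (read line + amplifier input)
    to settle to ~99% of its final value (5 time constants)."""
    return n_tau * r_ohm * c_farad

# Hypothetical read-line parasitics: 10 kΩ distributed resistance, 100 fF.
tau = 10e3 * 100e-15               # 1 ns time constant
print(settle_time(10e3, 100e-15))  # seconds to settle to ~99%
```

Shrinking the structures reduces C but raises R (thinner, narrower lines), so the product, and hence the access time, tends to stay put unless density is sacrificed.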