The main advantage of synchronous design is that its behavior is easy to predict, model, and validate, because everything happens on a predefined schedule. However, waiting for the next clock edge before performing an action makes a synchronous design slower than a comparable asynchronous design. And even when the circuit is not responding to its logic inputs, it still draws power, because it is still responding to the clock signal.
An asynchronous circuit can be much faster because it responds to its inputs as they change; there is no waiting for a clock edge before processing can take place. Asynchronous circuits can also draw less power, since they have nothing to do when their inputs are inactive, and they have better EMI performance, since there is no constantly toggling clock signal radiating from the board. But designing such systems is much more difficult, because every combination of inputs over time must be considered to ensure correct operation. When two inputs change at almost the same time, the result is a race condition, and the circuit can behave unpredictably if the designer didn't account for every combination of inputs at every possible relative timing.
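To make the race-condition idea concrete, here is a minimal C sketch in software terms. This is a loose analogy only: a hardware race is about gate and wire delays, not threads, and the thread function and iteration count below are made up for illustration. Two threads modify shared state "at almost the same time," and the final result depends on unpredictable ordering, just as an asynchronous circuit's output can depend on which input edge wins.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;            /* shared state, deliberately unprotected */

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                  /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 2000000; the unsynchronized race usually prints less,
     * and the exact value changes from run to run. */
    printf("counter = %ld\n", counter);
    return 0;
}
```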
Comparing synchronous to asynchronous design, you're probably thinking that a big company like Samsung could spend billions on research and design to fully model a DRAM circuit so that its operation is rock solid, and then we would have really fast, really low-power memory. So why is SDRAM so much more popular?
While asynchronous design is faster than synchronous design for sequential operations, it is much, much easier to design a circuit that performs parallel or simultaneous operations if those operations are synchronous. And when many operations can be performed at the same time, the speed advantage of asynchronous design disappears.
So the three main things to consider when designing a RAM circuit are speed, power, and ease of design. SDRAM beats plain DRAM on two of the three, and by a very large margin.
Wikipedia quotes:
Dynamic random-access memory -
The most significant change, and the primary reason that SDRAM has supplanted asynchronous RAM, is the support for multiple internal banks inside the DRAM chip. Using a few bits of "bank address" which accompany each command, a second bank can be activated and begin reading data while a read from the first bank is in progress. By alternating banks, an SDRAM device can keep the data bus continuously busy, in a way that asynchronous DRAM cannot.
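As a rough illustration of that bank-alternation idea, here is a toy cycle-by-cycle model in C. This is not a real SDRAM controller; the activate delay and burst length are invented numbers, not taken from any datasheet. The point is that while one bank is bursting data, the other bank's row activation is hidden under that burst, so the data bus never idles.

```c
#include <stdio.h>

#define ACTIVATE_CYCLES 3   /* cycles from ACTIVATE until a READ is allowed */
#define BURST_CYCLES    4   /* cycles of data per READ burst */

int main(void)
{
    int bus_busy = 0, cycle = 0;

    /* Four bursts, alternating banks 0,1,0,1. After the first burst,
     * each bank's ACTIVATE overlaps the other bank's data burst, so
     * only the data cycles appear on the bus. */
    for (int burst = 0; burst < 4; burst++) {
        int bank  = burst % 2;
        int setup = (burst == 0) ? ACTIVATE_CYCLES : 0;
        cycle += setup;
        for (int i = 0; i < BURST_CYCLES; i++) {
            printf("cycle %2d: bank %d drives data\n", cycle++, bank);
            bus_busy++;
        }
    }
    printf("bus utilization: %d of %d cycles\n", bus_busy, cycle);
    return 0;
}
```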
Synchronous dynamic random-access memory -
Classic DRAM has an asynchronous interface, which means that it responds as quickly as possible to changes in control inputs. SDRAM has a synchronous interface, meaning that it waits for a clock signal before responding to control inputs and is therefore synchronized with the computer's system bus. The clock is used to drive an internal finite state machine that pipelines incoming commands. The data storage area is divided into several banks, allowing the chip to work on several memory access commands at a time, interleaved among the separate banks. This allows higher data access rates than an asynchronous DRAM.
Pipelining means that the chip can accept a new command before it has finished processing the previous one. In a pipelined write, the write command can be immediately followed by another command, without waiting for the data to be written to the memory array. In a pipelined read, the requested data appears a fixed number of clock cycles after the read command (latency), clock cycles during which additional commands can be sent.
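To see what that fixed read latency looks like, here is a small C sketch of a command pipeline. The CAS latency of 3 and the number of reads are assumed values chosen only for illustration; the model just shows that new commands are issued during the latency cycles of earlier ones.

```c
#include <stdio.h>

#define CAS_LATENCY 3
#define NUM_READS   4

int main(void)
{
    /* data_ready[t] holds the read command whose data appears at clock t,
     * or -1 if the bus carries no data that clock. */
    int data_ready[NUM_READS + CAS_LATENCY];
    for (int t = 0; t < NUM_READS + CAS_LATENCY; t++)
        data_ready[t] = -1;

    for (int t = 0; t < NUM_READS + CAS_LATENCY; t++) {
        if (t < NUM_READS) {
            printf("clock %d: issue READ #%d\n", t, t);
            data_ready[t + CAS_LATENCY] = t;   /* schedule its data */
        }
        if (data_ready[t] >= 0)
            printf("clock %d: data for READ #%d on bus\n", t, data_ready[t]);
    }
    return 0;
}
```

By clock 3, READ #3 is being issued at the same time the data for READ #0 arrives, which is exactly the overlap the quoted passage describes.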
Best Answer
Sounds like a very educational project.
Typically, DRAM manufacturers specify that each row must have its storage cell capacitors refreshed every 64 ms or less.
My understanding is that you can use any one of the following 4 ways to keep the DRAM refreshed:
(a) My understanding is that all SDRAM has an internal on-chip timer that automatically refreshes the part when it is placed in self-refresh mode and the clock to the SDRAM is stopped. Most simple systems don't bother with self-refresh mode, and instead use one of the other methods of refreshing DRAM:
(b) Some systems have special refresh hardware that periodically pauses the CPU, performs a refresh, then resumes the CPU.
(c) A few systems have a special "refresh interrupt": a timer periodically triggers a hardware interrupt, and the software in the interrupt handler performs a refresh and returns. (Some systems interrupt once every 64 ms, and the interrupt handler reads N bytes -- one byte from every DRAM row -- refreshing all the DRAM in one whack, then returns; a C sketch of this style appears after this list. Other systems interrupt once every 64/N ms, increment a row counter, read one byte from that DRAM row, then return.) The "refresh interrupt" approach requires the least hardware. Alas, it has the drawback that minor bugs in the refresh-interrupt software, or bugs in any other software that delays the refresh interrupt for too long, cause weird, difficult-to-reproduce problems elsewhere as memory becomes corrupted.
(d) Many early computer systems had special DMA video hardware that pauses the CPU, reads video data from the DRAM, and sends it to the video hardware. Many of them are set up such that the process of reading out all the video data, as a side effect, also reads at least 1 byte from every row of DRAM, indirectly refreshing all DRAM.
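For the "refresh interrupt" approach in (c), a minimal C sketch of the burst-style handler might look like the following. The row count, row stride, and base address are assumptions for illustration only; real values come from the DRAM datasheet and the system's address multiplexing, and on a 68000 the handler would be installed in the appropriate exception vector.

```c
#include <stdint.h>

#define NUM_ROWS   128u   /* assumed: e.g. a small 4116-era part */
#define ROW_STRIDE 256u   /* assumed: addresses per row after multiplexing */

/* assumed DRAM base address for illustration */
static volatile uint8_t *const dram_base = (volatile uint8_t *)0x100000;

void refresh_interrupt_handler(void)   /* installed on a 64 ms timer */
{
    volatile uint8_t dummy;
    /* Reading any byte in a row strobes RAS on that row, refreshing
     * every cell in it, so touching one byte per row refreshes the
     * whole device. */
    for (uint32_t row = 0; row < NUM_ROWS; row++)
        dummy = dram_base[row * ROW_STRIDE];
    (void)dummy;
    /* acknowledge the timer interrupt here (hardware-specific) */
}
```

The distributed variant would instead keep a row counter, read a single byte per interrupt, and set the timer to fire every 64/NUM_ROWS ms (0.5 ms for the 128 rows assumed here), trading a shorter worst-case interrupt for more frequent ones.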
p.s.: Are you actually using an (effectively) 32-bit 68000, like the original Macintosh and Palm Pilot? If so, you may find it useful to check out the Minimig project, which uses a 68000 (and lots of SRAM), and the FPGA "soft cores" that execute the 68000 instruction set.
Or are you actually using an 8-bit Motorola 6800? If so, I highly recommend you check out the N8VEM Home Brew Computer Project.