It is not possible to make an electronic logic gate that functions while its current is always zero.
However, it is possible to arrange CMOS electronic logic gates in such a way that the energy capacitively stored on the transistor gates is later returned to the power supply, so the circuit uses almost zero net power. Once the system is powered up and all the bypass capacitors are fully charged, those logic gates can do an arbitrarily large amount of computation while pulling nearly zero current from the battery. Such arrangements are often called non-destructive computing.
Also, there are many ways to build logically equivalent computational structures without any electronic devices. Such non-electronic logic gates naturally use zero current, although nearly all of them require much more power to operate than their logically equivalent electronic gates.
non-electronic computing
Some non-electronic logic gates are listed in the article
"Ten weirdest computers".
A few more non-electronic logic gates that are apparently not quite weird enough to make that article:
David Cary has designed a CPU to be built entirely out of spool valves, and is still pondering whether to power the thing with traditional hydraulic oil pressure, water pressure, or air pressure.
Fluidic logic gates have no moving parts, unless you count the fluid moving through them as a "part".
(Is there an article on Wikipedia or some other wiki with a list of ways to implement the abstract concept of a "logic gate" ?)
non-destructive computing
Non-destructive computing, also called reversible computing, Charge Recovery Logic, or Adiabatic Logic, involves gates that use almost zero power.
When a computational system erases a bit of information, it must dissipate a theoretical minimum energy of kT ln(2) -- the von Neumann-Landauer limit -- where k is Boltzmann's constant and T is the temperature.
Most logic gates erase a bit of information for every logic operation.
However, there are a few logic gates that preserve every bit.
In theory these non-destructive logic gates could use far less power than the theoretical minimum power of bit-destructive logic gates.
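Both ideas above can be illustrated with a short sketch (Python, my addition, not from the original text): the Landauer cost of erasing one bit, and the Toffoli (CCNOT) gate, one standard example of a gate that preserves every bit.

```python
import math
from itertools import product

# Landauer limit: erasing one bit dissipates at least k*T*ln(2).
k = 1.380649e-23  # Boltzmann's constant, J/K

def landauer_limit(temperature_kelvin):
    """Theoretical minimum energy (joules) to erase one bit at temperature T."""
    return k * temperature_kelvin * math.log(2)

print(landauer_limit(300.0))  # about 2.87e-21 J at room temperature

# A Toffoli (CCNOT) gate flips c only when both controls are 1,
# so with c = 0 it computes AND reversibly.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

# The gate is its own inverse, and distinct inputs map to distinct
# outputs (a bijection), so no information is ever erased.
for bits in product((0, 1), repeat=3):
    assert toffoli(*toffoli(*bits)) == bits
assert len({toffoli(*bits) for bits in product((0, 1), repeat=3)}) == 8
```

Because the Toffoli map is a bijection on its 8 possible inputs, every output uniquely determines its input, which is exactly what "preserving every bit" means.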
"Reversible Logic" by Ralph C. Merkle at Zyvex
RevComp - The Reversible and Quantum Computing Research Group
has some nice photos of their reversible CPU.
According to David Harris's presentation for the eve224a course (slides 6-11 and 47):
Delay: d = f + p = g*h + p
where d is the process-independent delay, f = g*h is the effort delay (stage effort), p is the parasitic delay, g is the logical effort, and h is the electrical effort (fanout; h = C_out/C_in).
In the Wikipedia article "Logical Effort" there are some examples too:
Delay in an inverter. By definition, the logical effort g of an inverter is 1
Delay in NAND and NOR gates. The logical effort of a two-input NAND gate is calculated to be g = 4/3
For NOT gate with FO1 (driving the same NOT gate):
g=1; h=1; p=1; so d = 1*1 + 1 = 2
For NOT gate with FO4 (the FO4 metric itself):
g=1; h=4 (Cout is 4 times larger than Cin); p=1; so d = 1*4 + 1 = 5 (the same result appears on page 20 of the book "Logical Effort: Designing Fast CMOS Circuits", 1998 draft)
So 1 FO4 delay equals 5 process-independent units (as defined by Harris, slide 6).
For a NAND gate with two inputs (p=2) driving an identical NAND:
g=4/3; h=1; p=2; so d = 4/3 * 1 + 2 = 10/3 ≈ 3.33 (about 1.7 times slower than a NOT with FO1, but faster than a NOT with FO4)
For the NAND gate I asked about - 2 inputs, driving 3 identical NANDs:
g=4/3; h=3; p=2; so d = 4/3 * 3 + 2 = 6
So
The delay of one FO4 inverter is 5/6 of the delay of this NAND (2-input, fanout of 3).
The last problem is to convert the chain delay of 18 NANDs into a chain delay in FO4 units (slide 41 of Harris).
Hmm... it seems I only need to multiply the 18-NAND delay by 6/5: 18 * 6/5 = 21.6 FO4.
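The whole calculation above can be sketched in a few lines (Python, my addition, just restating the logical-effort formula d = g*h + p):

```python
def stage_delay(g, h, p):
    """Logical-effort stage delay d = g*h + p, in process-independent units."""
    return g * h + p

# Inverter driving an identical inverter (FO1): g=1, h=1, p=1
assert stage_delay(1, 1, 1) == 2
# Inverter with fanout of 4 (the FO4 metric): g=1, h=4, p=1
assert stage_delay(1, 4, 1) == 5
# 2-input NAND driving one identical NAND: g=4/3, h=1, p=2
print(stage_delay(4/3, 1, 2))  # 10/3, about 3.33
# 2-input NAND driving three identical NANDs: g=4/3, h=3, p=2
print(stage_delay(4/3, 3, 2))  # 6.0
# Chain of 18 such NANDs, expressed in FO4 units:
print(18 * stage_delay(4/3, 3, 2) / stage_delay(1, 4, 1))  # 21.6
```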
Thanks!
Best Answer
You're correct; that is not the right thing to do. The key is to take advantage of the ability to implement AND and OR via wiring. For example, here's a pull-down network that ORs together the results of two AND operations:
[Schematic omitted: a pull-down network that ORs together two series (AND) transistor stacks, originally drawn in CircuitLab]
Note that the OR doesn't require any extra transistors at all. ANDing together two ORs is similar. Using this method, you can implement ~(A~B + ~AB) using twelve transistors, including the inverters. There are more efficient ways that use transmission gates or dynamic logic, but I don't think you can go below twelve in traditional CMOS logic.
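As a sanity check (a sketch I'm adding, not part of the original answer), the sum-of-products expression realized by that pull-down network can be verified against the XNOR truth table:

```python
from itertools import product

def xnor_from_sum_of_products(a, b):
    """~(A*~B + ~A*B): the complement of XOR, i.e. XNOR,
    as realized by the pull-down network described above."""
    return 1 - ((a & (1 - b)) | ((1 - a) & b))

# XNOR is 1 exactly when both inputs are equal.
for a, b in product((0, 1), repeat=2):
    assert xnor_from_sum_of_products(a, b) == (1 if a == b else 0)
```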