Yields in DRAM and other Massively Redundant Processes

I'm currently combing the electrical engineering literature for the sorts of strategies employed to reliably produce highly complex but also extremely fragile systems such as DRAM, where you have an array of many millions of components and a single failure can brick the whole system.

A common strategy seems to be manufacturing a larger array than is actually needed, then selectively disabling damaged rows/columns using settable fuses. I've read [1] that (as of 2008) no DRAM module comes off the line fully functional, and that for 1 GB DDR3 modules, with all of the repair technologies in place, the overall yield goes from ~0% to around 70%.
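
For intuition, here's a toy sketch of how a few spare rows change that arithmetic, using a simple Poisson defect model in Python. The defect rate and spare count below are my own assumptions, not figures from [1].

    import math

    def yield_no_repair(defects_per_die: float) -> float:
        """Poisson model: probability a die comes off the line with zero defects."""
        return math.exp(-defects_per_die)

    def yield_with_repair(defects_per_die: float, spares: int) -> float:
        """Probability a die has at most `spares` defects, assuming each defect
        can be mapped out by blowing a fuse for one spare row/column."""
        return sum(
            math.exp(-defects_per_die) * defects_per_die**k / math.factorial(k)
            for k in range(spares + 1)
        )

    # Illustrative numbers only (assumed, not from [1]):
    d = 5.0       # mean defects per die
    spares = 10   # spare rows/columns available
    print(f"raw yield:      {yield_no_repair(d):.2%}")            # ~0.67%
    print(f"repaired yield: {yield_with_repair(d, spares):.2%}")  # ~98.63%

Under those assumed numbers, even a handful of spares turns a near-zero raw yield into something saleable, which is the flavor of the 2008 figure above.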

That's just one data point, however. What I'm wondering is: is this something that gets advertised in the field? Is there a decent source discussing the improvement in yield relative to the state of the art? I have sources like [2], which does a decent job of discussing yield from first-principles reasoning, but that's from 1991, and I imagine/hope that things are better now.

Additionally, are redundant rows/columns still employed today? How much additional die area does this redundancy require?

I've also been looking at other parallel systems like TFT displays. A colleague mentioned that Samsung, at one point, found it cheaper to manufacture broken displays and then repair them than to improve the process to an acceptable yield. I've yet to find a decent source on this, however.

Refs

[1]: Gutmann, Ronald J., et al. Wafer Level 3-D ICs Process Technology. New York: Springer, 2008.
[2]: Horiguchi, Masashi, et al. "A flexible redundancy technique for high-density DRAMs." IEEE Journal of Solid-State Circuits 26.1 (1991): 12-17.

Best Answer

No manufacturer will ever release yield data unless they have to for some reason; it's considered a trade secret. So, to answer your question directly: no, it isn't advertised in the industry.

However, there are many engineers whose job it is to improve line throughput and end-of-line yield. This often involves techniques like binning and block redundancy to make parts that come off the line with defects functional enough to be saleable. Block redundancy is certainly still used today. It's pretty easy to analyze:

(failed blocks per part / blocks per part)^2

That'll get you the probability of both parallel blocks failing. I doubt you'd end up with a yield as low as 70%, as typically 90% is the minimum acceptable yield.
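
As a quick numerical sketch of that expression (hypothetical block counts, purely illustrative):

    def both_blocks_fail(failed_blocks: int, total_blocks: int) -> float:
        """Probability a block and its parallel spare both fail,
        assuming independent failures (the expression above)."""
        p = failed_blocks / total_blocks
        return p * p

    # Hypothetical part: 4096 blocks, of which 20 are defective on average.
    p_both = both_blocks_fail(20, 4096)
    print(f"P(block and its spare both fail) = {p_both:.2e}")        # ~2.4e-05

    # If any unrepairable block kills the part, a crude independence
    # estimate of part-level yield is:
    blocks = 4096
    print(f"approximate part yield = {(1 - p_both) ** blocks:.1%}")  # ~90.7%

With those made-up numbers the part-level yield lands right around that 90% floor, while the same defect density without spares would give essentially zero part-level yield under this model, consistent with the ~0% raw figure in the question.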
