Why don't mainframe applications have bugs?

mainframe

It seems like old iron runs rock-solid software. Why is that? Is it because the software is so mature that all the bugs have been worked out? Or is it because people have become so used to the bugs that they no longer recognize them and simply work around them? Were the software specs perfect from day one, so that once the software was written, everything just worked? I'm trying to understand how we got from the mainframe computing days, which everyone now describes as "just working", to the feeling that TDD is now the way to go.

Best Answer

Why on Earth do you think they don't have bugs?

IBM has a vast support infrastructure for bug reporting and resolution (PMRs, APARs and PTFs), which is heavily used.

Mainframe software that hasn't been touched for many years will certainly be well understood (at least in terms of its idiosyncrasies) and will likely have had many of its bugs either fixed or worked around. The new stuff being developed nowadays actually plans for a certain number of bugs and patches from GA (general availability) through at least GA + 36 months. In fact, an ex-boss of mine at IBM used to baulk at being forced to provide figures for planned bugs, with the line: "We're not planning to have any bugs".

The mainframe espouses RAS principles (reliability, availability and serviceability) beyond what most desktop hardware and software could ever aspire to - that's only my opinion of course, but I'm right :-)

That's because IBM knows all too well that the cost of fixing bugs increases a great deal as you move through the development cycle - it's a lot cheaper to fix a bug in unit testing than it is to fix one in production, in terms of both money and reputation.

There's a great deal of effort and cost expended on trying to release only bug-free software, but even they don't get it right all the time.
